cinnamon rollout posted:Well that seems easy enough, thank you Remember to disable the preprocessor if you're re-using a pose
|
# ? Feb 28, 2023 21:53 |

cinnamon rollout posted:Well that seems easy enough, thank you Also, you can click and drag almost anything from and to Auto1111. No need to go through a save dialog, or an open dialog, when moving images to ControlNet or inpaint.
|
# ? Feb 28, 2023 21:59 |
|
I appreciate all the help and it worked great!
|
# ? Feb 28, 2023 22:12 |
|
lunar detritus posted:Never mind, it's not as magical as I thought. proper use of this technology, I laughed
|
# ? Mar 1, 2023 01:21 |
|
The Sausages posted:lol Did I miss it or was this channel ever a much bigger deal in this thread? Because I first saw it on GoonTube and it's leaps and bounds more frequently hilarious for AI than anything else yet. Probably owing to everyone's familiarity with that single Simpsons scene, so there's a lot more expectations to subvert. The narration voices add a lot too. Don't sleep on this. On this livestream a poor captive AI tries over and over to recreate the 1996 Steamed Hams scene, with randomized ingredients, and it never stops having the strangest of failures.
|
# ? Mar 1, 2023 06:48 |
|
Anybody else find that highres fix really wants to fill empty walls and spaces with stuff?
|
# ? Mar 1, 2023 06:58 |
|
I've made further improvements to my auto-video generation script https://www.youtube.com/watch?v=io4oXMksyM8 https://www.youtube.com/watch?v=fjLUuVv6xHo https://www.youtube.com/watch?v=ue6MUGfrMns
|
# ? Mar 1, 2023 07:02 |
|
ronya posted:Anybody else find that highres fix really wants to fill empty walls and spaces with stuff? if you did want the benefit of more hallucinating on faces etc. you can always inpaint at different strengths
|
# ? Mar 1, 2023 07:56 |
|
I'd like more controllable localised dreaming indeed, but inpainting requires overhauling the prompts to strip out stuff not masked...
|
# ? Mar 1, 2023 08:28 |
|
I still can't figure out how to use the pose thing, and of course there's zero documentation because coders are fucks. You'd think with the advent of ChatGPT they might finally actually make some basic documentation since now the major hurdle, writing the English language in a semi-coherent mush, is no longer an issue. But nope! Fuzz fucked around with this message at 14:33 on Mar 1, 2023 |
# ? Mar 1, 2023 14:31 |
|
Fuzz posted:I still can't figure out how to use the pose thing, and of course there's zero documentation because coders are fucks. There is https://cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/md_doc_00_index.html
|
# ? Mar 1, 2023 14:49 |
|
Fuzz posted:I still can't figure out how to use the pose thing, and of course there's zero documentation because coders are fucks. I'm pretty sure it requires 1.x and doesn't work with 2.x. Are you using the correct checkpoint? It doesn't work with 2.1-768 for me, that's for sure. My install could just be hosed too
|
# ? Mar 1, 2023 14:49 |
|
No, ControlNet currently needs a 1.5-based SD checkpoint. It will error if you attempt an internal render res of > 512x512. I believe I read that they are working on a 2.x version, though. Unrelated, but I was surprised to find that Diablo 3 is still perfectly playable while rendering on my 4090. I assumed all the resources would be dedicated to SD and it would run poorly. The it/s is still great as well. These were a batch of 20 DPM++ 2M Karras @ 50 steps, 512x512 with a 1.5 upscale. LASER BEAM DREAM fucked around with this message at 16:42 on Mar 1, 2023 |
# ? Mar 1, 2023 16:36 |
|
I found a fun style: all from a single large batch (about 4% sample) bonus funny low quality with prompt text leak
|
# ? Mar 1, 2023 17:24 |
|
So someone came up with a Lora style overlay that trains all the layers in the Unet model using convolutional kernels. https://github.com/KohakuBlueleaf/LoCon It's still early and the examples on the page aren't exactly the best, but people are starting to train on it and the results I'm seeing in Discords are impressive. Good chance this is Lora 2.0, I think.
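Not a definitive account of LoCon's internals, but the rough shape of the idea (a LoRA-style low-rank update applied to a convolution weight instead of just the attention/linear layers) can be sketched like this. All shapes and the rank are illustrative, not real SD layer sizes:

```python
import numpy as np

# Sketch of a low-rank update for a convolution weight. A full conv
# weight has shape (out_ch, in_ch, k, k); instead of training a delta
# of that full size, train a small A (rank, in_ch, k, k) and
# B (out_ch, rank) and combine them, so far fewer parameters change.
out_ch, in_ch, k, rank = 8, 4, 3, 2
A = np.random.randn(rank, in_ch, k, k) * 0.01
B = np.zeros((out_ch, rank))   # starts at zero: no change to the base model

# Combine the two factors into a delta with the full weight's shape.
delta_W = np.einsum("or,rikl->oikl", B, A)

full_params = out_ch * in_ch * k * k
lora_params = A.size + B.size
print(delta_W.shape)                               # (8, 4, 3, 3)
print(lora_params, "trained instead of", full_params)
```

The same trick as regular Lora, just with the extra kernel dimensions carried along, which is why it can also touch the noise convolution layers.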
|
# ? Mar 1, 2023 18:27 |
|
https://www.youtube.com/watch?v=ljBSmQdL_Ow
|
# ? Mar 1, 2023 18:31 |
|
I feel like such a poser in this scene when I look at some of the actual math going on. Like a monkey playing with an iPad.
|
# ? Mar 1, 2023 18:31 |
|
Objective Action posted:So someone came up with a Lora style overlay that trains all the layers in the Unet model using convolutional kernels.
|
# ? Mar 1, 2023 18:32 |
|
LASER BEAM DREAM posted:I feel like such a poser in this scene when I look at some of the actual math going on. Like a monkey playing with an iPad. I don't fully understand the math, but I internally think about it as: the image is a planet orbiting a point (star, sure, it's a solar system) and each prompt word is turning on a different star's gravity; they all exist in set spots, you are just enabling some. This is in a 3D plane. The seed is the orbit of the planet. The negative prompt is another 3D plane with no overlap that also pulls the image. I think of it as two 3D planes, but if you can keep track of a 6D object in your head, go for it. Anyone that understands the math know if that is remotely accurate?
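For what it's worth, the actual mechanism behind prompt vs. negative prompt is less exotic than orbiting planets: at each denoising step the model predicts the noise twice, once conditioned on the prompt and once on the negative (or empty) prompt, and the guidance scale pushes the result away from the negative prediction toward the positive one (classifier-free guidance). A toy numeric sketch, with made-up 3-vectors standing in for the big latent tensors:

```python
import numpy as np

# Classifier-free guidance in miniature. In real SD these are latent
# noise tensors; here they're tiny made-up vectors.
eps_neg = np.array([0.1, 0.2, -0.3])   # noise predicted with negative/empty prompt
eps_pos = np.array([0.5, -0.1, 0.0])   # noise predicted with the positive prompt
cfg_scale = 7.5                        # the CFG scale slider in Auto1111

# Start from the negative prediction and push toward the positive one.
eps_guided = eps_neg + cfg_scale * (eps_pos - eps_neg)
print(eps_guided)
```

At scale 1 you'd get exactly the positive prediction; higher scales over-shoot past it, which is why high CFG values look "over-prompted".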
|
# ? Mar 1, 2023 18:37 |
|
Objective Action posted:So someone came up with a Lora style overlay that trains all the layers in the Unet model using convolutional kernels. I'm not sure I understand this. Lora is cool because it can produce the desired output by only training some additional layers that can be added to an already trained model through some fancy math, meaning it's faster to train, smaller, and composable. This takes that and trains all the layers, so is it competing with Dreambooth training, not Lora?
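The "fancy math" in question is pretty small: LoRA stores two skinny matrices per layer, and the adapted weight is just the frozen weight plus a scaled low-rank product. A sketch with illustrative sizes (not real SD layer dimensions), including why stacking two LoRAs composes:

```python
import numpy as np

# LoRA's weight merge: W_adapted = W + (alpha / rank) * B @ A.
d = 6
W = np.random.randn(d, d)              # frozen pretrained weight
rank, alpha = 2, 1.0
A = np.random.randn(rank, d) * 0.01    # small trained factor
B = np.random.randn(d, rank) * 0.01    # small trained factor

W_adapted = W + (alpha / rank) * (B @ A)

# Composable: a second LoRA just adds its own delta onto the same base.
A2 = np.random.randn(rank, d) * 0.01
B2 = np.random.randn(d, rank) * 0.01
W_both = W + (alpha / rank) * (B @ A) + (alpha / rank) * (B2 @ A2)
print(W_adapted.shape, W_both.shape)
```

Because the deltas are independent additions to the base weight, you can mix several LoRAs at load time without retraining anything, which full-model approaches like Dreambooth can't do.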
|
# ? Mar 1, 2023 18:39 |
|
Tom Clancy is Dead posted:I'm not sure I understand this. Lora is cool because it can produce the desired output by only training some additional layers that can be added an already trained model through some fancy math, meaning it's faster to train, smaller, and composable. This takes that and trains all the layers so is it competing with dreambooth training not Lora? So Lora training basically tweaks the transformer blocks that let the model go to/from latent noise and text. The Locon approach does that but also lets you tweak the noise convolution layers as well. Effectively Lora is "please assemble parts this way" and Locon adds "here are some extra parts you can use". Kind of. Dreambooth basically takes the model and says to it "find a set of parts you already understand and rearrange them to look like this". It's almost the same, but Dreambooth/Textual Inversion basically pushes all the other knowledge out of the model's brain to do it.
|
# ? Mar 1, 2023 18:56 |
|
I don't think that's right about TI. TI is just training a new prompt word that is very, very specifically tweaked to instruct the model how to make something given the knowledge it already has. Dreambooth training is literally adding in new knowledge.
|
# ? Mar 1, 2023 19:34 |
|
I've been playing with the new low light / contrast lora that I found on the Stable Diffusion subreddit yesterday. It produces some pretty decent results! Also, for some fun, have SD make things that involve text. The results are pretty funny (for now). No Getting there Almost got it!
|
# ? Mar 1, 2023 21:42 |
|
They should sell t-shirts with garbled text slogans on designed with Stable Diffusion so CCTV recordings of you look fake and you can get away with crimes
|
# ? Mar 1, 2023 21:59 |
|
https://www.youtube.com/watch?v=ptEZQrKgHAg
|
# ? Mar 1, 2023 22:00 |
|
For text, gen the images to get composition, make an image of the text you want in the right place for that composition, and then regenerate the image using that text image as a control net.
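A minimal sketch of the middle step, rendering the text image you would feed to ControlNet, using Pillow. The position, wording, and output filename are all just placeholders:

```python
from PIL import Image, ImageDraw

# Render white text on a black 512x512 canvas; this is the image you'd
# hand to a ControlNet preprocessor (e.g. canny or scribble) when
# regenerating the composition.
img = Image.new("RGB", (512, 512), "black")
draw = ImageDraw.Draw(img)
draw.text((160, 240), "STEAMED HAMS", fill="white")  # placeholder text/position
img.save("text_control.png")
print(img.size)
```

High-contrast white-on-black works well here because edge-based preprocessors pick the letter shapes up cleanly.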
|
# ? Mar 1, 2023 22:04 |
|
Tom Clancy is Dead posted:For text, gen the images to get composition, make an image of the text you want in the right place for that composition, and then regenerate the image using that text image as a control net. That's so funny, I just made this in response to a drawing by Saddest Rhino in PYF
|
# ? Mar 1, 2023 22:09 |
|
Man, I just got access to the beta of that Bing chat and they really crippled the hell out of that thing, huh. Sometimes it'll give you like half a visible answer, then blank it real quick, then give an "I can't answer that" answer. Then when I asked it what happened, it started to answer like three times and finally just gave me a blank box. Then made me hit the broom to clear the conversation. You can search for like, basic information with it but I'm not sure it has that much over just using Google or whatever anymore.
|
# ? Mar 2, 2023 02:30 |
|
I don't have access, but apparently Bing has an option to set how "creative" it is with the answers. Maybe yours is set to giving only high-confidence/not-creative output?
|
# ? Mar 2, 2023 02:33 |
|
I normally don't have enough of an idea to make posing worthwhile but I had an idea
|
# ? Mar 2, 2023 02:38 |
|
Clown egg?
|
# ? Mar 2, 2023 02:41 |
|
So what's the new easy way to install Automatic1111? I think I need to do a fresh install, mine is being all fucky lately.
|
# ? Mar 2, 2023 03:02 |
|
Fuzz posted:So what's the new easy way to install Automatic1111? I think I need to do a fresh install, mine is being all fucky lately. https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre Never tried this but it looks pretty straightforward. I just reinstalled because the AUTO-MBW extension linked a few pages ago broke everything since I use --medvram, but I just git clone'd into a new folder then manually copied over the model files.
|
# ? Mar 2, 2023 06:30 |
|
I sure appreciate all the progress that's being made, but keeping up with all the new and exciting stuff that's getting released at breakneck speeds is starting to feel like a full time job.
|
# ? Mar 2, 2023 07:29 |
|
Speaking of which https://multidiffusion.github.io
|
# ? Mar 2, 2023 09:01 |
|
woah that's nice
|
# ? Mar 2, 2023 09:31 |
|
https://www.youtube.com/watch?v=RWvKpEYvo4s
|
# ? Mar 2, 2023 10:08 |
|
Fuzz posted:So what's the new easy way to install Automatic1111? I think I need to do a fresh install, mine is being all fucky lately. My instance went fucky as well and finally died and couldn't be resurrected so I tried https://github.com/EmpireMediaScience/A1111-Web-UI-Installer, and it installed and works fine* on my windows 10 laptop. Make sure you back up and re-use your ~\stable-diffusion-webui\models\ folder! Unless you want to wait while everything downloads the first time you use a model, not just the stable-diffusion models but a whole bunch of features use models e.g. upscaling, depth, controlnet etc. Luckily I learnt that lesson the last time before all these extensions were a thing. *worked fine up until I deleted the old instance, turns out the previous installer I had used had done some janky setup poo poo to kludge python into working. Deleting a wayward registry key fixed the issue. Any issues I had were caused by the previous installer (camenduru's offline installer from way back in Nov 2022), not this one.
|
# ? Mar 2, 2023 12:26 |
|
can we make this the background image for the thread somehow?
|
# ? Mar 2, 2023 14:31 |
drat it astral rip off xen's thread banner concept
|
# ? Mar 2, 2023 14:40 |