|
Dr. Video Games 0031 posted:I'll give this a shot, but what is this actually doing? Are there any side effects? My 1660 Super doesn't support half-precision floating-point numbers, so "--precision full --no-half" forces it into full-precision floats. It takes up more space in VRAM but it works just fine. "--no-half-vae" does the same but just for the VAE. There are slight variations in the output, but overall it'll still be the same picture.
|
# ? Oct 19, 2022 22:26 |
|
|
# ? May 29, 2024 21:41 |
|
Fuzz posted:So it's all porn, then. The Trinart models, Waifu Diffusion (at least 1.2, idk about 1.3), and one of the several leaked NovelAI models were all finetuned on only SFW anime stuff. However, there is nudity in the base LAION-5B dataset that Stable Diffusion itself was trained on, so that kind of stuff can still show up even if it doesn't exist in the finetune dataset. Dr. Video Games 0031 posted:I'll give this a shot, but what is this actually doing? Are there any side effects? The side effect of --no-half and --no-half-vae is that they double the VRAM usage of certain categories of data by forcing full-precision instead of half-precision floats. It's needed on some older graphics cards (like my 1650 Super) which don't have proper hardware support for half-precision floats and produce poo poo like black or green squares. The doubled VRAM usage is kind of annoying because it means I also need to use --lowvram instead of --medvram to bring it back under control, which slows generation down. RPATDO_LAMD fucked around with this message at 23:05 on Oct 19, 2022 |
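To put rough numbers on that doubling, here's a back-of-the-envelope sketch in Python. The ~860M parameter count for the SD v1 UNet is an assumed ballpark figure, not something measured here, and this only counts the weights themselves, not activations:

```python
# Back-of-envelope: why --no-half roughly doubles VRAM for the weights.
# fp32 stores 4 bytes per parameter, fp16 stores 2. The ~860M parameter
# count for the SD v1 UNet is an assumed ballpark, not a measurement.

def weights_vram_gib(num_params: int, bytes_per_param: int) -> float:
    """Memory needed just to hold the weights, in GiB."""
    return num_params * bytes_per_param / 1024**3

UNET_PARAMS = 860_000_000  # assumed approximate figure

half = weights_vram_gib(UNET_PARAMS, 2)  # half precision (default)
full = weights_vram_gib(UNET_PARAMS, 4)  # full precision (--no-half)

print(f"fp16: {half:.2f} GiB, fp32: {full:.2f} GiB, ratio: {full / half:.1f}x")
```

That works out to roughly 1.6 GiB vs 3.2 GiB for the weights alone, which is why a card that was fine on --medvram can get pushed down to --lowvram territory.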
# ? Oct 19, 2022 22:58 |
|
https://twitter.com/_akhaliq/status/1582825597059104769
|
# ? Oct 20, 2022 02:56 |
|
Just upgraded from a 3080 Ti to a 4090. To get the 4090 to run correctly in SD, I had to update the cuDNN DLLs in torch. After that, the speedup with no further optimizations (just basic-rear end SD) is roughly 30-40%. 40 Euler a samples, batch size 16x1, 512x512 image size on the 3080 Ti gave me 12.6 it/s, and the 4090 gives me 16.7 it/s on average across the whole job. The 2x3 batch size, 1024x1024 image size VRAM stress test gave me 0.71 it/s on the 3080 Ti and 0.98 it/s on the 4090. The uplift doesn't quite live up to the boost in raw compute the 4090 has over the 3080 Ti, which is something I heard about in advance. However, after this test, I ran it with xformers (described here; I had to update Python, then nuke and redownload venv via the webui launcher before downloading xformers) and got 21.34 it/s in the 16x1 test. And the 1024x1024 2x3 test completed at 2.34 it/s, a 3.3x boost over the 3080 Ti. I didn't use xformers on the 3080 Ti, so this isn't a true apples-to-apples comparison, but I'm quite happy with this performance. edit: the algorithm be making these foxes yiff something fierce though (I wasn't using the argument above for these tests) Dr. Video Games 0031 fucked around with this message at 05:08 on Oct 20, 2022 |
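For what it's worth, the ratios check out against the it/s figures quoted in that post (a quick sanity-check script, nothing more):

```python
# Speedup ratios recomputed from the it/s figures quoted in the post above.
benchmarks = {
    "16x1 @ 512x512 (no xformers)": (12.6, 16.7),    # (3080 Ti, 4090) it/s
    "2x3 @ 1024x1024 (no xformers)": (0.71, 0.98),
    "2x3 @ 1024x1024 (4090+xformers vs 3080 Ti)": (0.71, 2.34),
}

for name, (before, after) in benchmarks.items():
    print(f"{name}: {after / before:.2f}x")
```

The first ratio lands at 1.33x, matching the quoted "roughly 30-40%" uplift, and the last at 3.30x, matching the "3.3x boost" figure.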
# ? Oct 20, 2022 03:35 |
|
Yep here we go
|
# ? Oct 20, 2022 03:48 |
|
Dr. Video Games 0031 posted:Just upgraded from a 3080 Ti to a 4090. To get the 4090 to run correctly in SD, I had to update the CUDNN DLLs in torch. After that, the speedup with no further optimizations (just basic-rear end SD) is roughly 30 - 40%. 40 Euler a samples, batch size 16x1, 512x512 image size on the 3080 Ti gave me 12.6 it/s, and the 4090 gives me 16.7 it/s on average across the whole job. The 2x3 batch size, 1024x1024 image size VRAM stress test gave me 0.71 it/s on the 3080 Ti and 0.98 it/s on the 4090. ohhhhh drat, I didn't even know about this. Since it was still faster than my 2070S, I just thought "well, I guess it's an upgrade, thought it'd be more but whatever." But this made a world of difference
|
# ? Oct 20, 2022 04:48 |
|
Yeah, before updating cuDNN, 16x1 jobs completed at around 10.5 it/s, for reference. Basically:

1. Make sure you have the latest Python installed, because some things won't work if you don't.
2. Nuke the venv directory in your webui folder and run webui-user.bat to redownload it. (This doesn't need to be done if you already installed it with Python 3.10, I think; I had to because I was still on 3.8.)
3. Close the webui program, then start up the command prompt in the webui folder and run venv\scripts\activate.bat, which will put you into the venv prompt.
4. There, execute "pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl".
5. Download the latest cuDNN DLLs from Nvidia (someone also posted them in the github thread linked above) and copy them into "stable-diffusion-webui-master\venv\Lib\site-packages\torch\lib".
6. Edit the webui-user.bat file to include the "--xformers" argument.

After all of that, the 4090 goes brrr. xformers is supposed to bring a nice speedup to most Nvidia GPUs, though I'm not sure how it works or what exactly it does, so YMMV.
|
# ? Oct 20, 2022 05:20 |
KakerMix posted:Yep here we go quote:*All music owned by Mubert Inc. Please visit https://www.mubert.com/ for commercial licensing inquiries.
|
|
# ? Oct 20, 2022 05:27 |
|
FFT posted:Choo choo, new copyright hullabaloo *has mp3* Yeah I made this on my computer :))))))))
|
# ? Oct 20, 2022 05:46 |
|
FFT posted:Choo choo, new copyright hullabaloo From the replies, it sounds like the program just strings pre-made sound clips together, so if they hold the copyright on those clips, the output probably really is all their intellectual property as well.
|
# ? Oct 20, 2022 06:00 |
|
Yeah, calling it "music generation" is way overselling what this thing seems to be.
|
# ? Oct 20, 2022 07:01 |
|
besides, AI music generation was already perfected over a decade ago https://www.youtube.com/watch?v=mg0l7f25bhU
|
# ? Oct 20, 2022 07:03 |
|
procedural music is old, check out these masterpieces I made with Fractmus https://www.youtube.com/watch?v=xBaA-iTYwi4
|
# ? Oct 20, 2022 09:19 |
|
Dr. Video Games 0031 posted:Yeah, before updating cuDNN, 16x1 jobs completed at around 10.5 it/s, for reference. Really want something that can do multiple frames per second. Then you could drag a weight slider between "bear" and "gorilla" and see it transform in real time
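That slider would essentially be linear interpolation between the two prompts' embeddings. A toy sketch in plain Python, with short made-up vectors standing in for real CLIP text embeddings:

```python
def lerp(a, b, t):
    """Linearly interpolate between two embedding vectors; t in [0, 1]."""
    return [x + t * (y - x) for x, y in zip(a, b)]

# Toy stand-ins for real CLIP text embeddings (which are much longer).
bear = [1.0, 0.0, 0.5]
gorilla = [0.0, 1.0, 0.5]

# Dragging the slider = sweeping t from 0 (pure "bear") to 1 (pure "gorilla").
print(lerp(bear, gorilla, 0.5))  # -> [0.5, 0.5, 0.5]
```

Real-time morphing would then just be re-running the sampler on the blended embedding every time t changes, which is exactly why the frames-per-second speed matters.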
|
# ? Oct 20, 2022 11:44 |
|
Rutibex posted:procedural music is old, check out these masterpieces I made with Fractmus gonna add this to my bang list, right after hudson mohawke
|
# ? Oct 20, 2022 12:32 |
|
My DnD group is doing a Halloween campaign, so I tried to generate a character and backstory with Stable Diffusion and GPT-Neo. The results are, I think, really good. This is my AI-generated fairy clown bard, Coconut. And his 99% AI-generated backstory (it got names wrong a few times, and I stitched together the results from a couple of prompts): quote:“The Underdark” was the name given to a region of dark and evil-filled, nonhuman lands filled with things beyond human comprehension. As the population of humans has increased, so has the influence of these “others” on society. Many people have sought out a place like home away from home; a world where the problems of life are not felt every day. Some go so far as to seek out such places only to be disappointed when they learn that the land is actually filled with creatures of evil and dark. It should come as no surprise then that people who choose to venture into this world may end up finding themselves facing off against something more sinister than anything that has yet been encountered by mankind.
|
# ? Oct 20, 2022 12:52 |
|
I trained a chatbot to act like my D&D character. It got interesting https://beta.character.ai/
|
# ? Oct 20, 2022 12:58 |
|
So I finally got around to installing the AUTOMATIC1111 UI instead of just doing everything on the command line, and holy poo poo is this easier and faster.
|
# ? Oct 20, 2022 15:43 |
|
Welp, someone is using the Studio Ghibli model I made to make NFTs. They look really lovely too, because they're using it on existing lovely NFT art.
|
# ? Oct 20, 2022 15:46 |
|
oh no this thread has been a mistake
|
# ? Oct 20, 2022 16:52 |
|
i love how DALLE just always knows what you mean when it comes to pop culture "Betty and Veronica dressed as punk rockers, comic book cover" is all i had to use to get these
|
# ? Oct 20, 2022 17:46 |
|
drat it, apparently AUTOMATIC1111 just got milkshake ducked, and he's actually a huge racist who makes RimWorld mods to remove all the non-white people and to add an ultra-aggressive faction called "Peaceful Protestors" with George Floyd as the faction leader. https://www.reddit.com/r/StableDiff...share&context=3 Every loving time.... Back to command-line poo poo, I guess.
|
# ? Oct 20, 2022 18:01 |
|
"Courtney Love passed out backstage, realistic photograph"
|
# ? Oct 20, 2022 18:08 |
|
RunwayML released the Stable Diffusion 1.5 model: https://huggingface.co/runwayml/stable-diffusion-v1-5. As far as I can tell, this doesn't have any NSFW stuff removed like Emad mentioned there would be.
|
# ? Oct 20, 2022 18:12 |
|
What exactly is the dynamic here? Why was this model not published through the same org as previous versions?
|
# ? Oct 20, 2022 18:19 |
|
BoldFace posted:What exactly is the dynamic here? Why was this model not published through the same org as previous versions? There's a lot of hand-wringing about this on Reddit right now and nobody really knows. I'd give it a day or two to let it shake out before downloading the new checkpoint.
|
# ? Oct 20, 2022 18:19 |
|
Well I'm downloading it in case it gets taken down later
|
# ? Oct 20, 2022 18:25 |
|
RunwayML is one of the companies funding Stable Diffusion, so you can probably trust it. The reason people weren't sure before is that there was no official announcement about it. But they've made a tweet now. https://twitter.com/runwayml/status/1583109275643105280?s=20&t=MjgFOVcxSOOVxornnVqmjw
|
# ? Oct 20, 2022 18:38 |
|
IShallRiseAgain posted:runwayML released the stable diffusion 1.5 model https://huggingface.co/runwayml/stable-diffusion-v1-5. As far as I can tell this doesn't have any NSFW stuff removed like emad mentioned there would be. It looks like they have some kind of connection with Stability AI, as Emad mentioned them on the official Discord back in August. Apparently they had something to do with training all the previous Stable Diffusion models too? However, there has been no official communication about this 1.5 release. It was already privately available, and in fact RunwayML released an inpainting model based on a finetune of v1.5 two days ago. So maybe it's just some kind of communication fuckup, with a "private partner" releasing the v1.5 weights publicly before Stability could. Mr Luxury Yacht posted:drat it apparently automatic111 just got milkshake ducked and he's actually a huge racist who makes Rimworld mods to remove all the non-white people and to add an ultra aggressive faction called "Peaceful Protestors" with George Floyd as a faction leader. Shoulda seen it coming, I guess; the guy came right outta the 4chan Stable Diffusion threads. That's also why all the text guides people link to on rentry etc. are full of anime porn and calling everyone retards... they're written by the 4chan threads. e: here's a screenshot, cus reddit keeps making their new-layout website worse if you're not logged in and on the app, and loving up links so they don't actually take you where they point. thanks, reddit user "dog fucker" RPATDO_LAMD fucked around with this message at 18:46 on Oct 20, 2022 |
# ? Oct 20, 2022 18:39 |
|
loving yikes
|
# ? Oct 20, 2022 18:48 |
|
Is there an alternative GUI implementation that isn't created by a racist dipshit and also probably isn't going to eat my computer with malicious code? It's a shame; I was never able to get the half-precision thing working on the command line (I always got type errors even when I followed the guides exactly), and it was nice to have a version of SD that took a hell of a lot less time to generate images.
|
# ? Oct 20, 2022 19:23 |
|
lmao. well should've known he's up to no good when all the examples were creepy anime and furry poo poo. You can still use the build, it's not like he's getting money from you. Or fork it. I just don't feel like janitoring this poo poo.
|
# ? Oct 20, 2022 19:26 |
|
You're not paying him any money and if any of the code was maliciously inserting swastikas or whatever it would've been noticed by now. Just use it for your own ends.
|
# ? Oct 20, 2022 19:30 |
Mr Luxury Yacht posted:Is there an alternative GUI implementation that isn't created by a racist dipshit and also probably isn't going to eat my computer with malicious code? https://github.com/invoke-ai/InvokeAI is an alternative, but features take longer to be implemented compared to automatic.
|
|
# ? Oct 20, 2022 19:35 |
|
Elotana posted:You're not paying him any money and if any of the code was maliciously inserting swastikas or whatever it would've been noticed by now. Just use it for your own ends. Use it to make memes about how much he loving sucks.
|
# ? Oct 20, 2022 19:37 |
|
Came up with this using DALL-E a while ago. I don't remember the prompt specifically; there was an "in the style of..." I put in there, but the interesting thing is I didn't describe the expression on his face. I just put something like "Homer sitting on his car staring up at the night sky" Trent Reznor in his twenties getting a hug and getting told everything is going to be ok
|
# ? Oct 20, 2022 19:39 |
|
Now there's a legal takedown notice on the SD 1.5 repo page. Seems like one of StabilityAI's (ex-?) partners released the 1.5 model without permission.
|
# ? Oct 20, 2022 19:49 |
|
Stability AI has issued a DMCA takedown against the Runway ML repo with the 1.5 model, grab it while you can. https://huggingface.co/runwayml/stable-diffusion-v1-5/discussions/1
|
# ? Oct 20, 2022 19:49 |
|
Fuzz posted:Use it to make memes about how much he loving sucks. A man in a KKK robe on fire flailing his arms, lovely, ugly, fine detail, hyper realistic
|
# ? Oct 20, 2022 19:56 |
|
|
|
Rahu posted:Stability AI has issued a DMCA takedown against the Runway ML repo with the 1.5 model, grab it while you can. Haha loving knew it lol. Let me know if anyone wants a mirror or something.
|
# ? Oct 20, 2022 20:15 |