|
ymgve posted:I mean, I've been impressed by some of the autogenerated lyrics it created when I asked for songs, and haven't tried the new version, so was not sure if you gave it a prompt or the actual lyrics. Oh yeah, I wrote the stand-up. You could probably prompt for a routine outside of Suno if you wanted, but within Suno it's always going to structure it as a song. Use Suno long enough and the auto-generated lyrics start to stand out. There are a bunch of cliches it uses in almost everything. It especially loves NEON, so much so that it's become a joke on the discord whenever someone posts something with neon in it at all, because it's a tell.
|
# ? Mar 25, 2024 19:39 |
|
|
# ? Apr 28, 2024 21:40 |
|
namlosh posted:Awesome, lots of keywords for me to google. I’m sure that I’m probably using a UI that’s old… it’s command line lol. It just looked like the official one so I started there. I am still a huge proponent of InvokeAI - https://github.com/invoke-ai/InvokeAI - if you want to give that a twirl. Have been using it since October '22, it's pretty neat, all in all. Fairly easy install and comprehensive help/troubleshooting pages for when something goes wrong. They have some tutorials on their YT channel to help you familiarize yourself with it.
|
# ? Mar 25, 2024 20:38 |
|
Sab669 fucked around with this message at 20:42 on Mar 25, 2024 |
# ? Mar 25, 2024 20:39 |
|
some of my latest finds in the image mines
|
# ? Mar 25, 2024 22:39 |
|
I proffer a challenge. A cat's body with a snake's head. I can get a snake's body with a cat's head but not the other way around. Anyone willing to try? I've tried 3 models and no luck.
|
# ? Mar 26, 2024 00:48 |
|
Succeeded in Midjourney by prompting for a snake, then selecting everything but the head and prompting for a cat/kitten, removing all references to a snake. Prompting for an Egyptian sphinx --> cobra head --> cat body seems to lean more toward making a lizard. Also made some of the inverse along the way trying other methods in Bing/DALL-E and Midjourney. Soulhunter fucked around with this message at 01:55 on Mar 26, 2024 |
# ? Mar 26, 2024 01:51 |
|
What you said gave me an idea. I used the vary region tool in midjourney. I generated a cat and then varied a large region around its head. It really does not want to generate a snake's head on the body. I had to use the prompt "A snake with a cat's body sitting on a table. snake head. snake head " on the vary region prompt to attempt to impress the snake head term into the generation. just one snake head did not seem to work as well.
|
# ? Mar 26, 2024 03:32 |
|
namlosh posted:Awesome, lots of keywords for me to google. I’m sure that I’m probably using a UI that’s old… it’s command line lol. It just looked like the official one so I started there. As far as I know you've managed to install the OG version of Stable Diffusion; I'm kinda impressed that you went there of all places. If you went that way because you want to learn more about how it works I'd recommend this old version as well, since it explains what the software is doing. Otoh if you just want to make pics, try Stability Matrix: it's an installer/manager for SD versions and models/loras/etc. It lets you browse and search civitai as well, no CLI/python/git needed. It also has a decent built-in GUI that runs on a comfyUI backend. As far as "vetted" models go, if you stick to the most popular ones on civitai.com they tend to be good. Imo sdxl is consistently better at higher resolutions, but with upscaling such as a1111's hires fix it's less crucial. You can also filter checkpoints by "trained models", which helps for originality; a lot of merged models are kind of an inbred mess. Definitely recommend browsing civitai with the nsfw filters on if you aren't desensitized to internet filth - heck, I feel pretty desensitized but I still leave the filters on when seeing what's new. Another tip for beginners is to find pics you like on civitai and copy the generation parameters. Prompts aren't so important as having a sampler/steps/cfg combo that works consistently for the art style you want.
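For what it's worth, the "copy the generation parameters" tip works because a1111 writes them into the image's PNG info as a single comma-separated line. A rough sketch of pulling that apart (illustrative only - the exact fields vary by version, and this naive split breaks if a value itself contains a comma):

```python
def parse_gen_params(line):
    """Parse an a1111-style 'Steps: 30, Sampler: ...' line into a dict.

    Naive sketch: assumes no commas inside values.
    """
    params = {}
    for part in line.split(","):
        if ":" in part:
            key, value = part.split(":", 1)
            params[key.strip()] = value.strip()
    return params

example = "Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1234, Size: 1024x1024"
print(parse_gen_params(example)["Sampler"])  # DPM++ 2M Karras
```

Once you have the sampler/steps/cfg from a pic you like, plug those into your own UI and iterate on the prompt from there.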
|
# ? Mar 26, 2024 11:00 |
|
|
# ? Mar 26, 2024 20:48 |
|
namlosh posted:Whats a good way to get started? I quite like https://github.com/LykosAI/StabilityMatrix for new users as it lets you one click install any of the popular front ends, automatically consolidates the models between them, and has hooks to browse and download models right from the interface. e: fb, what I get for leaving tabs open
|
# ? Mar 26, 2024 21:00 |
|
Sedgr posted:https://civitai.com/ should be helpful for some models and such. The Sausages posted:As far as I know you've managed to install the OG version of Stable Diffusion, I'm kinda impressed that you went there of all places. If you went that way because you want to learn more about how it works I'd recommend this old version as well, it explains what the software is doing. Objective Action posted:I quite like https://github.com/LykosAI/StabilityMatrix for new users as it lets you one click install any of the popular front ends, automatically consolidates the models between them, and has hooks to browse and download models right from the interface. mcbexx posted:I am still a huge proponent of InvokeAI - https://github.com/invoke-ai/InvokeAI - if you want to give that a twirl. Have been using it since October '22, it's pretty neat, all in all. This is all amazing and I'm really thankful for the help. I am a software developer and I do mess with python a lot. And I am looking into learning more about the back-end of these. For example, I'm also messing around with Ollama for gpt type stuff. I've got the cli stable-diffusion1.0 working along with invoke-ai now thanks to this thread. I was planning to try a bunch of different options out at first... AUTOMATIC1111, ComfyUI, etc... but then I realized I'd have to re-download all of the models... But it sounds like StabilityMatrix would be able to help with that problem? Maybe I'll install this next. In the meantime I've also been able to get SDXL working... let's see if I have some of the terms right:
Model: stable-diffusion-xl-base-1-0
Checkpoints: realvisxlV40_v40LightningBakedvae
Refiner (or is this just another checkpoint?): sd_xl_refiner_1.0
ControlNet: a way to add a picture to control the generation of other pictures? (Haven't used one successfully yet)
IP Adapter: same/similar to the above? (Haven't used one successfully yet)
LoRA: zero clue... but I'll figure it out
Actually, I don't want to waste everyone's time with questions that I could probably easily look up. I just wanted to jot down some stuff and give a progress report. I wouldn't even be here at all if the thread hadn't helped out. Thanks all! ps: prompts (both +/-) are hard, lol. but dang this is fun. I kind of feel like I want to grab sections of literature and feed it in as prompts and just let it run through a book/newspaper overnight lol e: well dang... StabilityMatrix won't install any of the UIs correctly. It has an unhandled exception that blows up the entire app before I can even copy the stacktrace to fill out a proper bug report. I guess I'll just wait for another release then. Tried this one: https://github.com/LykosAI/StabilityMatrix/releases/tag/v2.9.2 namlosh fucked around with this message at 02:31 on Mar 27, 2024 |
# ? Mar 27, 2024 01:15 |
|
Hmm, it works fine for me on windows but iirc I had that problem when I tried it with EndeavourOS which I'm totally new to. Idk enough to troubleshoot and couldn't find anything related searching the issue so I gave up and went back to windows. No problem installing individual packages but I haven't used SD under linux apart from trying out some of the experimental AI video stuff like SEINE that uses python packages lacking windows support.
|
# ? Mar 27, 2024 04:38 |
|
I love this. If it had giant motorcycle wheels on the shoulders it'd be the cool toy I had as a kid where the motorcycle became body armor like a motorized iron man.
|
# ? Mar 27, 2024 05:51 |
|
https://x.com/abcdentminded/status/1772168953978098053?s=20 lmao
|
# ? Mar 27, 2024 13:07 |
|
A big old 'gently caress you' to whoever trains Bing/Dall-E: I was trying to get a picture of a woman of, y'know, average build rather than the rail-thin models AI image generators tend to default to when you ask for "a woman". The problem with Bing is that most of the descriptors you'd use to do that get eggdogged, but I eventually managed to get "slightly overweight" to work in a prompt. This is Bing's idea of a "slightly overweight" woman - ie, perfectly normal and healthy.
|
# ? Mar 27, 2024 13:39 |
|
Small Strange Bird posted:This is Bing's idea of a "slightly overweight" woman - ie, perfectly normal and healthy. I just tried "A slightly overweight woman in a cute city square" and I got 1 comically off result, and one that made me go "I wouldn't call her overweight, but like me she could probably spare 10-20 pounds and be fine"
|
# ? Mar 27, 2024 13:44 |
|
|
# ? Mar 27, 2024 16:21 |
|
namlosh posted:This is all amazing and I'm really thankful for the help. I’ve been using https://github.com/Chaoses-Ib/ComfyScript to work with my existing ComfyUI install. It turns comfy nodes into Python classes. It’s great if you want to write custom workflows but don’t want to bother with comfy’s JSON formats directly. As far as stabilitymatrix not working, I’m using v2.9.2 and haven’t had it blow up (yet). I hope you can get it working because it does take a lot of guesswork out of managing a bunch of front ends and models.
|
# ? Mar 27, 2024 16:30 |
|
Y'all need to timg your poo poo
|
# ? Mar 27, 2024 17:48 |
|
These two instantly reminded me of a video a German punk band released in December 2022. Early adopters, for sure. https://www.youtube.com/watch?v=-RQFHqnIfTc
|
# ? Mar 27, 2024 20:10 |
|
Small Strange Bird posted:A big old 'gently caress you' to whoever trains Bing/Dall-E: I was trying to get a picture of a woman of, y'know, average build rather than the rail-thin models AI image generators tend to default to when you ask for "a woman". The problem with Bing is that most of the descriptors you'd use to do that get eggdogged, but I eventually managed to get "slightly overweight" to work in a prompt. I'm not trying to be funny, but have you tried experimenting with different nationalities in the prompts? This is {tech bro trained matrixes using the last 20 years of the internet for material} country after all
|
# ? Mar 27, 2024 23:18 |
|
It has been a bountiful day. snorch fucked around with this message at 02:52 on Mar 29, 2024 |
# ? Mar 29, 2024 02:46 |
|
"My child, 'ware the woods at night. For then emerge the Things that bite, That snatch, and clutch, and steal away. To ne'er again see light of day!"
|
# ? Mar 29, 2024 07:59 |
|
|
# ? Mar 29, 2024 13:18 |
|
|
# ? Mar 30, 2024 20:27 |
|
The new Suno update is so good https://app.suno.ai/song/cb1b7f58-cdef-4bc9-b874-8375e2c9816d edit: https://app.suno.ai/song/36db2fa2-4778-4540-80c1-a85608478f54 smoobles fucked around with this message at 05:04 on Mar 31, 2024 |
# ? Mar 31, 2024 04:58 |
|
these are great
|
# ? Mar 31, 2024 07:13 |
|
Swagman posted:images What did you do these in?
|
# ? Mar 31, 2024 08:07 |
|
various flavors of lilith brought to you by: newrealityxl checkpoint through stability matrix/comfyui install happy transgender day of visibility!
|
# ? Mar 31, 2024 16:47 |
|
I am totally lost with samplers and schedulers in stability matrix. What is the difference? Why choose one or the other?
|
# ? Apr 2, 2024 01:39 |
|
Elendil004 posted:I am totally lost with samplers and schedulers in stability matrix. What is the difference? Why choose one or the other? The sampler is what actually generates the data (images and text); the scheduler determines how much to denoise at each step and when. Try a prompt, fix the seed, and gently caress around with the samplers and schedulers - you'll see different stuff. One I do know about is the ancestral variant, which never converges because it always adds fresh noise back in first. I've been dicking with SD for the last few days and it's a lot to learn compared to DallE or Midjourney. It is fun though. You have so much power with X/Y/Z plots and other functions that let you just try different things.
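To make the sampler/scheduler split a bit more concrete: the scheduler is basically just a list of noise levels (sigmas) that the sampler steps down through. A tiny sketch using the widely used Karras-style schedule (the sigma_min/sigma_max defaults here are only illustrative; real front ends pull them from the model config):

```python
def karras_sigmas(n, sigma_min=0.03, sigma_max=14.6, rho=7.0):
    """Build a Karras-style noise schedule: n decreasing sigma values.

    The sampler denoises from sigmas[i] to sigmas[i+1] at each step; an
    ancestral sampler also injects fresh noise after each step, which is
    why it never fully converges on a single image.
    """
    ramp = [i / (n - 1) for i in range(n)]
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    return [(max_inv + t * (min_inv - max_inv)) ** rho for t in ramp]

sigmas = karras_sigmas(20)
# starts at sigma_max, ends at sigma_min, strictly decreasing
```

Different schedulers are just different shapes of this curve (linear, exponential, Karras, etc.), which is why swapping them on a fixed seed changes the output.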
|
# ? Apr 2, 2024 01:55 |
|
How do I get it to use a Lora? I seem to only be able to select the checkpoints? edit: if there's a better thread for newb advice point me in that direction, sorry! Elendil004 fucked around with this message at 02:05 on Apr 2, 2024 |
# ? Apr 2, 2024 01:59 |
|
Holy poo poo I didn’t know we needed six-legged rodents, but we absolutely do.
|
# ? Apr 2, 2024 01:59 |
|
Elendil004 posted:How do I get it to use a Lora? I seem to only be able to select the checkpoints? If you have the Lora installed in the lora directory, all you need is to click the Lora heading on the Generation tab and go into the Lora selector (you might need to hit refresh). That will add something like <lora:Winter Waterfall:1> - the final number is the weight of the lora. There may be trigger words for particular Loras; I would look in the documentation on Civitai. Are you using the web interface?
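For what it's worth, that `<lora:name:weight>` bit is just text the front end strips out of your prompt before generating. A toy parser showing the shape of the syntax (assuming the weight defaults to 1.0 when omitted, which is how a1111 treats it - the names here are made up):

```python
import re

# matches <lora:name> or <lora:name:weight>
LORA_TAG = re.compile(r"<lora:([^:>]+)(?::([0-9.]+))?>")

def extract_loras(prompt):
    """Pull <lora:name:weight> tags out of a prompt.

    Returns the cleaned prompt plus a list of (name, weight) pairs;
    weight defaults to 1.0 when not given.
    """
    loras = [(m.group(1), float(m.group(2) or 1.0))
             for m in LORA_TAG.finditer(prompt)]
    cleaned = LORA_TAG.sub("", prompt).strip()
    return cleaned, loras

clean, loras = extract_loras("a frozen river <lora:Winter Waterfall:0.8>")
# clean == "a frozen river", loras == [("Winter Waterfall", 0.8)]
```

The trigger words are separate: those stay in the prompt itself, since they're what the lora was actually trained against.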
|
# ? Apr 2, 2024 02:06 |
|
Tarkus posted:If you have the Lora installed in the lora directory, all you need is to click the Lora heading on the Generation tab and go into the Lora selector (you might need to hit refresh). That will add something like <lora:Winter Waterfall:1> the final number is the weight of the lora. There may be trigger words for particular Lora's, I would look in the documentation on Civitai. I am using the desktop and I dont have any of those options as far as I can tell
|
# ? Apr 2, 2024 02:09 |
|
You might want to try using Automatic1111. You can use it through the second button down on the left; there's a button called launch and it'll open a browser window with a locally hosted web server running the interface.
|
# ? Apr 2, 2024 02:16 |
|
I'll take a look at Automatic1111 but right now I can't even get anything with a face, not sure where I'm going wrong compared to the masterworks in this thread. The general idea is there, federal agents around a nonhuman chalk outline. But the faces are AI 1.0 garbage.
|
# ? Apr 2, 2024 02:21 |
|
If you're using an SDXL model you need to make the image larger than 512x512; 1024x1024 is a good place to start. You need to specify certain things like camera angle and stuff depending on the checkpoint. What checkpoint are you using? E: go onto civitai and look at the page for the model you're using. Many of the user-submitted images have prompts, sampler, step and other data for making those images. Sometimes, to fix eyes and hands, you need to run the Hires fix. Tarkus fucked around with this message at 02:34 on Apr 2, 2024 |
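Since SDXL was trained around a roughly 1024x1024 pixel budget, a handy rule of thumb for other aspect ratios is to keep the total pixel count near 1024² and round the sides to a multiple of 64. A small sketch of that arithmetic (the exact training buckets vary by checkpoint, so treat these as starting points, not gospel):

```python
import math

def sdxl_size(aspect, total=1024 * 1024, multiple=64):
    """Suggest a width/height pair for SDXL: roughly `total` pixels at the
    given width:height aspect ratio, snapped to multiples of `multiple`."""
    width = math.sqrt(total * aspect)
    height = width / aspect
    snap = lambda x: max(multiple, round(x / multiple) * multiple)
    return snap(width), snap(height)

print(sdxl_size(1.0))      # (1024, 1024)
print(sdxl_size(16 / 9))   # roughly the widescreen bucket
```

Going much below that budget is where you get the melted faces; going much above it without a hires-fix style upscale tends to produce doubled bodies instead.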
# ? Apr 2, 2024 02:31 |
|
|
# ? Apr 2, 2024 04:44 |
|
|
Elendil004 posted:I am using the desktop and I dont have any of those options as far as I can tell code:
see below example: It's not a bad interface but it seems to have no documentation. For comparison A1111's feature showcase page is also a great crash course on how to actually use it.
|
# ? Apr 2, 2024 16:14 |