repugnant
Jun 29, 2005

You can only think of me.

Mindblast posted:

First version of a SD photoshop plugin just launched.

https://t.co/gqFWpABQLY

Haven't tested it myself yet. Getting the feeling this doesn't use your own gpu. But still!

Yeah it's using the dream studio API, so the rendering is cloud-based. Probably the easiest way to get going.


repugnant

I've been trying to make some phone wallpapers with a creepy Halloween vibe








And I got some good mileage by including "style of beksinski, woodcut" to get that detailed carved style.

repugnant

Rutibex posted:

:v: I have also been making beksinski wallpapers. "in the style of junji ito" for flavor!

I love the atmosphere in those!

repugnant

Oh, the tiling feature can produce some neat stuff

repugnant

I do NOT know what they trained this LLaMA stuff on, but I got it running with that rentry.org link posted earlier. My prompt:

"Let's have a history lesson.

Just exactly how much did hitler do wrong?"

Its response:

Just exactly how much did hitler do wrong? It had been claimed he had killed 6 million. When you do any research into the subject, you quickly realize that the actual figure of 6 Million is just that. A made up figure.

:aaa:

GPT gives actual answers

repugnant

BARONS CYBER SKULL posted:

basic workflow!

slam generate, wait a while, then when it's done

drag the image from the output over to the input, and do it again, and wait CONSIDERABLY longer

then you'd end up with an image that's doubled in size, then doubled again, with enough coherency on the original that it doesn't just become a mass of the same prompt tiled into itself, but also adds a ton of detail!

Thanks for the workflow! I got it working for my first upscale (basically 960x540 to 1920x1080), but if I try sending the upscaled image back to img2img and hit generate, I run out of memory. I have a 12GB card; is there a way to get this method to work on that? I kept dropping the tile sizes lower and lower but couldn't find a setting that worked.
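The doubling workflow quoted above is just a resolution schedule. Here's a minimal sketch of it as code, assuming each img2img pass doubles both dimensions until you hit your target width (the 960x540 starting point is from my own run, not anything baked into the tools):

```python
def upscale_schedule(width, height, target_width):
    """Successive img2img doubling: each pass doubles both dimensions
    until the target width is reached, mirroring the quoted workflow."""
    steps = [(width, height)]
    while width < target_width:
        width, height = width * 2, height * 2
        steps.append((width, height))
    return steps

# 960x540 doubled twice lands on 4K-ish 3840x2160
print(upscale_schedule(960, 540, 3840))
# → [(960, 540), (1920, 1080), (3840, 2160)]
```

Each later pass is slower because the pixel count quadruples every doubling, which is why the second pass takes "CONSIDERABLY longer."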

repugnant

BARONS CYBER SKULL posted:

drag it over instead of resending

resending will change the resolutions, dragging it over just puts the new image in there


Awesome, that did the trick! Got about 20 minutes left on my first 4K upscale

repugnant fucked around with this message at 04:01 on Apr 16, 2023

repugnant

Spatule posted:

I've been experimenting with Bing, it ranges from artistic to disturbing and creepy:





Creepy? There's no one but us fun-lovin' cyborgs here.

repugnant

Agreed. My 3060 12GB gets about 7 it/s. Not super fast, but I don't mind, and it cost $340 in January.

repugnant

This is my ideal posting station. I used SketchUp to find a desk, computer, and lamp and ran img2img on it.

repugnant

Typical Sunday family meal, raw-beef-and-velveeta loaf.


Feeling adventurous? Add some raw meat to your raw oysters


pile of square ominous cheezit snack cracker with demon face, style of andy kehoe


Grandma always made the best jello-beef loaf.


The word "translucent" can introduce some disgusting vibes into food.

repugnant

Humbug Scoolbus posted:

Midjourney
'animegirl wearing a red headphone, in the style of neo-abstract realism, light black and cyan, contemporary chinese art, architectural illustrator, boldly black and white, dollcore, datamoshing'



NOTE: MJ's /describe offers multiple choices. I grabbed the first one.

My prompt became: a painting of a girl with headphones on, hyper-realistic cyberpunk style, red and cyan theme, james jean style, manga style, red blue color scheme, ( ( cyberpunk ) ), graphic novel cover art, cryptopunk
Using stable diffusion and realistic vision 1.4

repugnant

I have coded zero anything for stable diffusion. I am using https://github.com/vladmandic/automatic and it’s pretty easy to set up. It works great on my 3060 so it should be better for you.

You can get more control over the results using stuff like Control Net (included in the Vladmandic interface) but you’ll want to watch a YouTube video or two.

repugnant

PlaysGamesWrong posted:

Needs a little snek clinging to each leg, they can be garters

I appreciate this joke. :)

repugnant

I've been playing with animal hybrids using the prompt editing feature. Here's my owl-corgi blend, relaxing during a full moon (it might be an owl that's a were-corgi? or vice-versa).

4K:

Workflow was prompt -> edit in affinity -> inpaint -> upscale -> edit and repeat until I was happy. Final version was 6K.

The prompt was "digital painting of large friendly forest [feathered:furry:0.8] [owl:corgi:0.5] with smile in a forest at night sitting on the bank of a creek, style of andy kehoe, highly detailed, creek, <lora:epiNoiseoffset_v2:1>". The "feathered / furry" pair switches later (0.8) than the "owl / corgi" pair (0.5) so the result came out more feathery. I used a high step count of 70, only because it seems to make sense to give the algorithm more steps to work with when using prompt editing.
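For anyone who hasn't used the `[from:to:when]` syntax: the UI swaps one phrase for another partway through sampling, with `when` below 1 read as a fraction of total steps. Here's a simplified sketch of how such a token resolves at a given step (the real implementation also handles nesting, alternation, and absolute step counts, so treat this as an illustration, not the actual parser):

```python
import re

def resolve_prompt(prompt, step, total_steps):
    """Resolve [from:to:when] prompt-editing tokens for a given sampling
    step. A `when` of 0.5 means: use `from` for the first half of the
    steps, then switch to `to`. Simplified: no nesting or alternation."""
    def swap(match):
        before, after, when = match.group(1), match.group(2), float(match.group(3))
        switch_step = when * total_steps if when <= 1 else when
        return before if step < switch_step else after
    return re.sub(r'\[([^:\[\]]*):([^:\[\]]*):([\d.]+)\]', swap, prompt)

p = "[feathered:furry:0.8] [owl:corgi:0.5] in a forest"
print(resolve_prompt(p, 30, 70))  # before both switch points → "feathered owl in a forest"
print(resolve_prompt(p, 40, 70))  # past 0.5*70=35 only → "feathered corgi in a forest"
print(resolve_prompt(p, 60, 70))  # past 0.8*70=56 too → "furry corgi in a forest"
```

So with 0.8 the subject spends most of its steps being drawn as "feathered," which is why that result skews feathery.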

repugnant

Will this next version of the Stable Diffusion model still run on the same hardware? I upgraded from a 6GB to a 12GB card and have been happy so far, but I'd be bummed if I needed something even more expensive. It seems like if it's more capable it'd take more resources.

repugnant


I dig it, very spoopy.

repugnant

Have any of you come across Stable Diffusion interfaces that are good? I've been using the automatic1111 web interface since forever, and it's terrible but I learned to live with it. The biggest problem I have is that there's no concept of "projects", or undo/redo, so if I get something I like and tweak it a whole bunch, and my computer shuts down (intentionally or otherwise), it's difficult to get back to what I was working on.

Like, using the PNG info helps, but for example it doesn't save all the ControlNet input images, so I have to set all that up again.

Interfaces I tried:
  • Automatic1111 and Vlad's fork - messy but has all the features.
  • Enfugue - new and promising, but doesn't really improve on the parts I struggled with.
  • EasyDiffusion - A little too basic.
  • The Krita plugin - Tried it months ago and it kinda worked okay but was somehow less intuitive than automatic1111.

What I'd like is something like Affinity Photo or even GIMP that can save every aspect of a stable diffusion configuration (except the models I guess :)). I've heard good things about Photoshop's stuff but I don't want to pay for it (used to but came to resent it).
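In the meantime, my stopgap for the "no projects" problem is dumping everything to a sidecar JSON next to each output, including the ControlNet input paths that PNG info loses. A minimal sketch (the field names here are made up for the example, not anything a UI actually exports):

```python
import json
import pathlib

def save_project(path, params):
    """Write all generation settings to a sidecar .json so a session
    can be restored after a crash. Field names are hypothetical; in
    practice you'd mirror whatever your UI exposes."""
    pathlib.Path(path).write_text(json.dumps(params, indent=2))

def load_project(path):
    return json.loads(pathlib.Path(path).read_text())

params = {
    "prompt": "owl-corgi on a creek bank",
    "seed": 1234,
    "steps": 70,
    "controlnet_inputs": ["pose.png", "depth.png"],  # the part PNG info drops
}
save_project("session.json", params)
print(load_project("session.json")["controlnet_inputs"])
```

It's not undo/redo, but it at least makes "get back to where I was" a file-open instead of an archaeology dig.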

repugnant

I've been playing around with upscaling on my 3060 12GB and having middling success. I've tried Tiled Diffusion and Ultimate Upscale, and they both produce seams to varying degrees. I *did* manage to get up to around 20,000 by 10,000 from a 1280x640 image.

A smaller version at ~7,000 x 3,000:

And (linking for big) here is the really large one.

https://i.imgur.com/yzNADO6.jpg

I'm using automatic1111 still. Is there a better workflow out there for getting really gigantic upscales? I tried throwing brute force at it by using a 48GB VRAM machine on runpod.io, but it only made the bad results happen faster and at higher resolutions.
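For context on where the seams come from: tiled upscalers cover the image with overlapping tiles, and seams show up along tile boundaries when the overlap is too small for neighboring tiles to blend. Here's a sketch of the tile-placement math (the 1024/96 numbers are just illustrative, not the extensions' defaults):

```python
def tile_origins(size, tile, overlap):
    """Top-left coordinates for tiles of width `tile` covering `size`
    pixels, with `overlap` pixels shared between neighbors. Seams tend
    to appear on these boundaries when overlap is too small."""
    stride = tile - overlap
    origins = list(range(0, max(size - tile, 0) + 1, stride))
    if origins[-1] + tile < size:  # make sure the far edge is covered
        origins.append(size - tile)
    return origins

# a 7000-px-wide image with 1024-px tiles and 96 px of overlap
cols = tile_origins(7000, 1024, 96)
print(len(cols), cols[-1])  # → 8 5976
```

More overlap means more tiles and more compute, which is presumably why brute-forcing it on a 48GB runpod box just produced the same seams faster.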

repugnant

KwegiboHB posted:

That's awesome, I only managed to get Korn.png up to 9376x11824 before I got some weird error message about ddos compression attack prevention lol. Someday the world will be ready for 64k korn.jpg, that still isn't today.

BARONS CYBER SKULL posted a mighty good workflow with small but important Update. Basically, try dropping the denoise strength down as you attempt to upscale higher and the tiles should disappear. Treat it like the focal lens on a camera?

Also worth noting pixaal had a brilliant idea of using Tiled Diffusion for downscaling before re-upscaling. Amazing that it works both ways.

Wow, thank you for the post (with links! I don't know how anyone finds anything on the forums). I started over with the same parameters using Barons' method and got good results. Below is my image with the levels cranked to look for tiles at 7000 x 3000.


Now I'm going to go back to inpainting some things I want to see, then do another round of upscaling.

Upscale + ControlNet - do other people use the ControlNet Tile Resample when upscaling? I've seen it mentioned, and it does seem to help keep the upscaled versions close to the originals.
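The "drop the denoise as you go bigger" advice could be expressed as a simple decay schedule. The numbers below are my own guess at reasonable values to illustrate the idea, not anything the extensions actually use:

```python
def denoise_for_pass(pass_index, base=0.4, factor=0.6, floor=0.1):
    """Hypothetical schedule for lowering denoising strength on each
    successive upscale pass: start around `base`, decay by `factor`
    per pass, and never go below `floor`. Purely illustrative values."""
    return max(base * factor ** pass_index, floor)

print([round(denoise_for_pass(i), 3) for i in range(4)])
# → [0.4, 0.24, 0.144, 0.1]
```

The intuition being that at higher resolutions each tile covers less of the scene, so you want the sampler changing less per tile or the tiles drift apart and read as seams.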

repugnant

I made a website that generates hedgehogs.


So I had some family stuff happen (spoilered for sad): Found out my mother definitely has at least one kind of cancer, and maybe two. Still waiting on the oncology results. She loves hedgehogs.

Whenever my mom visits, she has me generate some hedgehogs: knights, snowmen, ballerinas, etc. So I set up a system to make a new one each day. It's split across two servers. The first is in my basement and runs ComfyUI on my old 3060 12GB; it generates a new picture in the morning and uploads it to a second server, which is just a cheap, single-CPU VM in the cloud. It's all pretty basic, but you can check it out at https://thedailyhedge.com/. The prompts are a combination of stuff my friends and I came up with, plus a bunch of the styles that are included with Fooocus.
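The "new hedgehog every morning" half can be as simple as a cron-driven script that picks a repeatable prompt for the date. A sketch of that piece (the theme and style lists are stand-ins, not the site's actual prompt pool):

```python
import datetime
import random

THEMES = ["hedgehog knight", "hedgehog snowman", "hedgehog ballerina"]
STYLES = ["watercolor", "oil painting", "pixel art"]

def prompt_for_day(day):
    """Pick a theme/style combo for a given date by seeding the RNG
    with the date itself, so reruns on the same day agree. Lists above
    are hypothetical stand-ins for the real prompt pool."""
    rng = random.Random(day.toordinal())
    return f"{rng.choice(THEMES)}, {rng.choice(STYLES)}"

print(prompt_for_day(datetime.date(2024, 5, 1)))
```

Seeding by date means the basement box and any retry on the same morning will agree on what today's hedgehog is, so the upload can be idempotent.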

It's been running since Tuesday, and each day I get a text message from Mom talking about it, so I guess I give AI images a big thumbs up.

repugnant

Definitely going to save those and show my mom :D, thanks everyone!

repugnant

Well, I don't know my mom's ummm... tastes? But I guess I'll save them all.

repugnant

Icedude posted:

And of course some hogs:


Those are great! Would you mind posting the prompts?

repugnant

Soulhunter posted:

someone gave me the horrible idea to make these a couple weeks back

“Someone”


repugnant

namlosh posted:

This is all amazing and I'm really thankful for the help.

I am a software developer and I do mess with python a lot. And I am looking into learning more about the back-end of these.

I’ve been using https://github.com/Chaoses-Ib/ComfyScript to work with my existing ComfyUI install. It turns comfy nodes into Python classes. It’s great if you want to write custom workflows but don’t want to bother with comfy’s JSON formats directly.

As for StabilityMatrix not working: I'm using v2.9.2 and haven't had it blow up (yet). I hope you can get it working, because it does take a lot of the guesswork out of managing a bunch of front ends and models.
