WhiteHowler
Apr 3, 2001

I'M HUGE!
I'm bored today.


"bear ghost riding a car"


"Betty White as Emperor Palpatine from Star Wars"


"page from a coloring book about Samuel L. Jackson"


Chainclaw
Feb 14, 2009

Does anyone here have a good understanding of the impact of AI art generation when deciding between an Nvidia RTX A6000 and a high-end Mac Studio? My partner generates a lot of AI art in her workflow, and I'm wondering if an upgraded machine would give her more options and save her money (by reducing the time she spends generating stuff for her work). She knows Macs a lot better than Windows, but I'm under the impression that Nvidia is embracing the AI stuff a lot more than Apple, so I've seen some software that either runs better on Nvidia or is Nvidia-exclusive (because Nvidia makes it).

Hadlock
Nov 9, 2004

I would explore getting an external Thunderbolt 3 enclosure and slapping whatever GPU you want in it. Enclosures are like $400, but that might be less of a hassle and ultimately cheaper than buying a brand-new Mac desktop.

People poo poo all over external enclosures for gaming because of the bandwidth, but really, for this you're just packaging up a job and sending it over to the card. A 50MB job is a rounding error on a 40 Gbps connection.

Alternatively, you can buy a lot of GPU credits on AWS and send that work to the cloud rather than invest in new hardware locally.
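That "rounding error" claim pencils out; a quick back-of-the-envelope sketch (assuming the nominal 40 Gbps Thunderbolt 3 line rate, which real enclosures don't fully reach):

```python
# Time to push a 50MB job over a Thunderbolt 3 link.
# 40 Gbps is the nominal line rate; real eGPU enclosures see less
# (PCIe tunneling overhead), so treat this as a best case.
job_bytes = 50 * 1024 * 1024              # 50MB payload
link_bits_per_s = 40e9                    # 40 Gbps nominal
transfer_s = (job_bytes * 8) / link_bits_per_s
print(f"{transfer_s * 1000:.1f} ms")      # ~10 ms vs. seconds of sampling
```

Around ten milliseconds to move the job, against diffusion sampling that takes whole seconds on the card itself.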

Chainclaw
Feb 14, 2009

Hadlock posted:

I would explore getting an external Thunderbolt 3 enclosure and slapping whatever GPU you want in it. Enclosures are like $400, but that might be less of a hassle and ultimately cheaper than buying a brand-new Mac desktop.

People poo poo all over external enclosures for gaming because of the bandwidth, but really, for this you're just packaging up a job and sending it over to the card. A 50MB job is a rounding error on a 40 Gbps connection.

Alternatively, you can buy a lot of GPU credits on AWS and send that work to the cloud rather than invest in new hardware locally.

Once I get a rough idea of the pricing and specs of a machine that would work for her, I'd probably also compare when the break-even point for cost would be. She's spent the last two years using AI art in a lot of her workflows. It's never the direct product, but it's really handy for inclusion in things like mood boards. A bonus there is that you don't have to worry about any issues with the model in use if copyright stuff becomes a problem: because the art isn't being released externally, you don't have to scramble if there's ever something like "Disney has a C&D for this AI model and any art generated with it; you have to stop using it for commercial purposes immediately."

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Chainclaw posted:

Once I get a rough idea of the pricing and specs of a machine that would work for her, I'd probably also compare when the break-even point for cost would be. She's spent the last two years using AI art in a lot of her workflows. It's never the direct product, but it's really handy for inclusion in things like mood boards. A bonus there is that you don't have to worry about any issues with the model in use if copyright stuff becomes a problem: because the art isn't being released externally, you don't have to scramble if there's ever something like "Disney has a C&D for this AI model and any art generated with it; you have to stop using it for commercial purposes immediately."

Yeah, try to find out if she would even need anything like the A6000. Unless she'd be training her own models it seems like any gamer RTX card would get you an image in a few seconds.

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


Hadlock posted:

I would explore getting an external Thunderbolt 3 enclosure and slapping whatever GPU you want in it. Enclosures are like $400, but that might be less of a hassle and ultimately cheaper than buying a brand-new Mac desktop.

People poo poo all over external enclosures for gaming because of the bandwidth, but really, for this you're just packaging up a job and sending it over to the card. A 50MB job is a rounding error on a 40 Gbps connection.

Alternatively, you can buy a lot of GPU credits on AWS and send that work to the cloud rather than invest in new hardware locally.

I've been thinking an external graphics card would be completely viable for this. Latency? The gently caress I care about that, I'm not rendering video.

FunkyAl
Mar 28, 2010

Your vitals soar.

BrainDance posted:

Like what?

I can't think of a single other method to create "art" that doesn't have "insert one human brain" as a major step.

My degree and publication record on one very, veryyyy small part of human psychology tells me that no matter how complex this gets it's absolutely nothing compared to the human brain.

This one has that too. It is the brains of several programmers and of the people completing picture captchas.


madmatt112 posted:

I’m like 80% sure this poster is using a text AI to at least start their posts. I took the bait earlier and immediately regretted it.

I'm only me.

Chainclaw
Feb 14, 2009

mobby_6kl posted:

Yeah, try to find out if she would even need anything like the A6000. Unless she'd be training her own models it seems like any gamer RTX card would get you an image in a few seconds.

I've got a 3090 running Stable Diffusion, and that's why I thought an upgrade would be useful: it's capping out the size of images we can generate. Yeah, we can do more iterative processes, generate a 512x512 then use the upscaler, but I wanted to price out something better. We wouldn't be pulling the trigger on this purchase for some time; we need to 1) price out what an upgrade looks like, 2) understand what that would enable, workflow-wise, 3) look at her current work and workflow and examine how much time it would save her, 4) compare all that to using cloud compute instead, and then 5) if this would save her money on her work over the course of probably 2 years, jump on it. Otherwise, don't.

She's not generating her own models.
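For step 4, the break-even math is just one division once you have quotes; a sketch with placeholder numbers (the card price and cloud rate below are made up, not real quotes):

```python
# Break-even: buy a workstation GPU outright vs. rent cloud GPU time.
# Every number below is an illustrative placeholder -- plug in real quotes.
local_cost = 4500.00        # hypothetical A6000-class upgrade, USD
cloud_rate = 1.50           # hypothetical cloud GPU rate, USD/hour
hours_per_week = 10         # hours spent generating

weeks = local_cost / (cloud_rate * hours_per_week)
print(f"break even after {weeks:.0f} weeks ({weeks / 52:.1f} years)")
```

At these made-up numbers the cloud wins over a two-year horizon; the point is just that the comparison is trivial once real quotes are in hand.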

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


Chainclaw posted:

I've got a 3090 running Stable Diffusion, and that's why I thought an upgrade would be useful: it's capping out the size of images we can generate. Yeah, we can do more iterative processes, generate a 512x512 then use the upscaler, but I wanted to price out something better. We wouldn't be pulling the trigger on this purchase for some time; we need to 1) price out what an upgrade looks like, 2) understand what that would enable, workflow-wise, 3) look at her current work and workflow and examine how much time it would save her, 4) compare all that to using cloud compute instead, and then 5) if this would save her money on her work over the course of probably 2 years, jump on it. Otherwise, don't.

She's not generating her own models.

Stable Diffusion larger than 512x512 still samples in 512x512 chunks and tends to produce weird duplicates. What I mean is, if you say "a dog at the edge of a swimming pool" you'll likely get a large pool with an island in the middle with a dog on it, and 4-5 more dogs starting 513 pixels away from the island dog.

You could easily do larger images when it was in Discord; most people switched to 512x512 very quickly because of these problems. Larger is only recommended for scenes, though it does end up being a cool effect for crowds and stuff.

If this issue has been resolved since then, let me know. I haven't even tried to render higher than 512x512 locally because of my experience during the early phase. I did make some cool '80s rock posters (that had like 2 bands, one hanging from the ceiling and one on stage).

pixaal fucked around with this message at 20:17 on Oct 4, 2022

Hadlock
Nov 9, 2004

Is there a hard limit of 512px or is that a constant somewhere you can change? A quick look yields only 8 results for "512" in the repo. Probably has a big impact on render time/memory consumption though, maybe exponential
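The memory and render-time impact is indeed worse than linear: self-attention in the U-Net scales with the square of the number of latent tokens, so doubling the edge length quadruples the pixels and multiplies the naive attention cost by 16. A rough sketch, assuming SD's usual 8x VAE downsampling and full (non-sliced) attention:

```python
# Naive attention-cost scaling with image size in latent diffusion.
# Assumes an 8x VAE downsample and full (non-sliced) self-attention.
def attention_tokens(px: int, vae_factor: int = 8) -> int:
    latent = px // vae_factor      # e.g. 512px -> 64x64 latent grid
    return latent * latent         # one token per latent position

base = attention_tokens(512)
for px in (512, 768, 1024):
    n = attention_tokens(px)
    rel = (n / base) ** 2          # attention is O(n^2) in token count
    print(f"{px}px: {n} tokens, ~{rel:.0f}x the attention cost of 512px")
```

So even before the tiling artifacts, 1024x1024 costs on the order of 16x the attention work of 512x512, which is why VRAM runs out so fast.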

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


Hadlock posted:

Is there a hard limit of 512px or is that a constant somewhere you can change? A quick look yields only 8 results for "512" in the repo. Probably has a big impact on render time/memory consumption though, maybe exponential

I think it's actually rendered in a 64x64 latent space and then decoded 8x up to 512, which is why dimensions have to be a multiple of 8 and why smaller looks so lovely.

Tunicate
May 15, 2012

Hadlock posted:

Is there a hard limit of 512px or is that a constant somewhere you can change? A quick look yields only 8 results for "512" in the repo. Probably has a big impact on render time/memory consumption though, maybe exponential

It's how the model was trained.


The high-res fix in the Automatic1111 repo works okay, effectively adding another upscale step from 512x512 to your final resolution. You will need to tweak params a bit to get consistent outputs, though.

Chainclaw
Feb 14, 2009

OK, that's really saying to me there isn't a huge reason to upgrade from a 3090, then, unless we were training our own model and writing some of this stuff ourselves. I've got too many hobbies to dive deep into how this stuff works under the hood, so I'll pass on that.

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


Chainclaw posted:

OK, that's really saying to me there isn't a huge reason to upgrade from a 3090, then, unless we were training our own model and writing some of this stuff ourselves. I've got too many hobbies to dive deep into how this stuff works under the hood, so I'll pass on that.

We don't know what the future holds; it could be lower requirements and higher resolution, or it could end up wanting a better card. It's currently like trying to buy a card for Half-Life 3.

Chainclaw
Feb 14, 2009

pixaal posted:

We don't know what the future holds; it could be lower requirements and higher resolution, or it could end up wanting a better card. It's currently like trying to buy a card for Half-Life 3.

I was shopping specifically for workflows right now; my partner's been using a lot of AI generation in her work for the past two years to great success, but she has a lot of complaints about the process.

Mercury_Storm
Jun 12, 2003

*chomp chomp chomp*
"children's storybook cover about a communist baby seal teaching Das Kapital and Juche philosophy"



BoldFace
Feb 28, 2011
For those with weak or even average GPUs, using Google Colab can be a viable alternative for starting to experiment. It doesn't even cost anything, unless Google decides that enough is enough and closes the tap on free GPU resources.

Hadlock
Nov 9, 2004

pixaal posted:

We don't know what the future holds; it could be lower requirements and higher resolution, or it could end up wanting a better card. It's currently like trying to buy a card for Half-Life 3.

Yeah, I suspect efficiency and memory usage will drop over the next 24 months, maybe by as much as half. Up until 60 days ago there were a couple hundred people globally working on this stuff, half of them total hacks. We will probably see some C and Rust libraries written to more effectively leverage Intel and Nvidia hardware; I think right now most people using the model are using Python, which is... not the most efficient language for this stuff. Nvidia probably has a lot of tweaking to do in their drivers as well.

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


Hadlock posted:

Yeah, I suspect efficiency and memory usage will drop over the next 24 months, maybe by as much as half. Up until 60 days ago there were a couple hundred people globally working on this stuff, half of them total hacks. We will probably see some C and Rust libraries written to more effectively leverage Intel and Nvidia hardware; I think right now most people using the model are using Python, which is... not the most efficient language for this stuff. Nvidia probably has a lot of tweaking to do in their drivers as well.

Would this just be polishing the Studio Driver for AI work, or would this be a third driver package? Am I really contemplating switching drivers and rebooting my computer to switch between gaming and art?

SCheeseman
Apr 23, 2003

FunkyAl posted:

I'm only me.

This is true of anyone posting AI-generated text (and images, for that matter). Ultimately there is a human filtering and choosing to put those works out there, and that human is responsible for the content.

Hadlock
Nov 9, 2004

I have no idea, honestly. All I know on the Nvidia side is that there's an awful lot of black-box magic that happens inside the driver binary blob, plus whatever firmware is on the card itself. If you do a Google search for "quack3.exe" there are some ancient articles from 2001 where reviewers renamed the Quake 3 executable to "quack3" :laugh: and performance went down, because the driver was detecting the benchmark by name and optimizing specifically for it.

Stable Diffusion is new enough that it'll probably be Q1 or Q2 before specific optimizations are made internally by Nvidia and released.

Rutibex
Sep 9, 2001

by Fluffdaddy
Conan the librarian movie poster

Sedgr
Sep 16, 2007

Neat!

What is best in life? To open libraries. To see people driven to learn before you. And to read the philosophies of their women.

Squatch Ambassador
Nov 12, 2008

What? Never seen a shaved Squatch before?
Taking another shot at replacing Voynich Manuscript images without touching the text, using inpainting, and with "text" as a negative prompt, but it still likes to draw some.
Here's the original and mask


Voynich Manuscript illustrations in the style of Dr. Seuss



Voynich Manuscript illustrations in the style of Shrek
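For reference, the mask convention (assuming the usual grayscale one) is white = repaint, black = keep; a toy sketch of what the mask over a page amounts to:

```python
import numpy as np

# Toy inpainting mask: white (255) pixels get repainted, black (0)
# pixels are preserved. Real masks are hand-painted over the scan,
# but the convention is the same.
h, w = 64, 64
mask = np.zeros((h, w), dtype=np.uint8)   # start as "keep everything"
mask[8:40, 8:56] = 255                    # mark an illustration box for repaint

repainted = int(np.count_nonzero(mask == 255))
kept = int(np.count_nonzero(mask == 0))
print(f"repaint {repainted} px, keep {kept} px")
```

Keeping the glyph regions at 0 is what protects the text; the "text" negative prompt only discourages the model inside the white region.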

WhiteHowler
Apr 3, 2001

I'M HUGE!

Squatch Ambassador posted:

Taking another shot at replacing Voynich Manuscript images without touching the text, using inpainting, and with "text" as a negative prompt, but it still likes to draw some.

Those are neat!

Have you tried running them back through img2img without a mask? If you use the same prompts, a low CFG, and a very low de-noising value, it can sometimes clean up uneven and blurred areas caused by inpainting masks.

Mercury_Storm
Jun 12, 2003

*chomp chomp chomp*
"baby harp seal disguised as a penguin"

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20
Loopback experiments
32 loops at denoising strength 0.3
same prompt/CFG scale/seed

0.9 denoising strength change factor
https://i.imgur.com/RBaYLha.mp4
Looks like it converges to a neat color burn-in effect.

1.1 denoising strength change factor
https://i.imgur.com/Jb3e5Aj.mp4
Slowed down for comparison; each still follows the theme, but only loosely. Might be useful for photobashing.
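For anyone curious where those two runs end up: assuming the loopback script just multiplies the strength by the change factor each loop, the schedules are geometric, which explains the converge-vs-drift behavior:

```python
# Loopback denoising strength under a multiplicative "change factor":
# s_k = s_0 * f**k (assuming that's how the script applies it).
def strength_schedule(start: float, factor: float, loops: int) -> list:
    return [start * factor ** k for k in range(loops)]

fading = strength_schedule(0.3, 0.9, 32)    # decays toward 0: image freezes
growing = strength_schedule(0.3, 1.1, 32)   # blows past 1: image drifts hard

print(f"factor 0.9 ends at {fading[-1]:.3f}")   # 0.011
print(f"factor 1.1 ends at {growing[-1]:.3f}")  # 5.758 (clamped to 1.0 in practice)
```

The 0.9 run's strength decays to near zero, so later loops barely change the image (hence the burn-in); the 1.1 run saturates at full denoising, so each loop nearly reimagines the frame.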

MechanicalTomPetty
Oct 30, 2011

Runnin' down a dream
That never would come to me
I'm running into a weird issue where my text2image prompts are taking significantly longer to generate. It used to be I could generate 2-3 images at a time in about 4 minutes without issue; now it's taking up to 6 minutes on average just to process one image with the same settings. Sometimes I'll hear my machine spin up and the processing speed shoots right back up again, but this seems to be happening at random and I really don't know why.

Edit: This is on a 1080 by the way.

MechanicalTomPetty fucked around with this message at 04:41 on Oct 5, 2022

Squatch Ambassador
Nov 12, 2008

What? Never seen a shaved Squatch before?

WhiteHowler posted:

Those are neat!

Have you tried running them back through img2img without a mask? If you use the same prompts, a low CFG, and a very low de-noising value, it can sometimes clean up uneven and blurred areas caused by inpainting masks.

I just tried that; having the values high enough to have an effect also messes with the text. Thanks for the tip though, it did help with hiding the seams from outpainting.

an improbable bird, demon, interdimensional alien


Using outpainting and img2img with the same prompt to complete them



Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope

FFT posted:

Is there a command-line thing for negative prompts in Stable Diffusion or do I need to get Automatic1111 for that?

I rather like having pure command line prompt setups

You can run custom Python scripts in the Automatic1111 UI. Or I guess you could check the source and see how they've done the negative prompt thing.
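For what it's worth, under the hood a negative prompt isn't really a separate feature: classifier-free guidance extrapolates away from an unconditional prediction, and the negative prompt's embedding simply replaces the empty unconditional one. A toy numpy sketch of the guidance step (real pipelines do this per diffusion step on big latent tensors):

```python
import numpy as np

# Classifier-free guidance with a negative prompt, in toy form.
# eps_* stand in for the U-Net's noise predictions (here, tiny vectors).
def guided(eps_cond, eps_neg, cfg_scale):
    # Extrapolate away from the negative prediction, toward the positive.
    return eps_neg + cfg_scale * (eps_cond - eps_neg)

eps_cond = np.array([1.0, 0.0])   # prediction for the positive prompt
eps_neg = np.array([0.5, 0.5])    # prediction for the negative prompt
print(guided(eps_cond, eps_neg, cfg_scale=7.5))
```

With an empty negative prompt, eps_neg is just the unconditional prediction and this reduces to plain CFG, which is why any script that exposes CFG can usually bolt negative prompts on.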

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope

MechanicalTomPetty posted:

I'm running into a weird issue where my text2image prompts are taking significantly longer to generate. It used to be I could generate 2-3 images at a time in about 4 minutes without issue; now it's taking up to 6 minutes on average just to process one image with the same settings. Sometimes I'll hear my machine spin up and the processing speed shoots right back up again, but this seems to be happening at random and I really don't know why.

Edit: This is on a 1080 by the way.

Are you using your computer for other things (reading the forums, watching YouTube) while it's generating? That slows it down.

Rutibex
Sep 9, 2001

by Fluffdaddy

MechanicalTomPetty posted:

I'm running into a weird issue where my text2image prompts are taking significantly longer to generate. It used to be I could generate 2-3 images at a time in about 4 minutes without issue; now it's taking up to 6 minutes on average just to process one image with the same settings. Sometimes I'll hear my machine spin up and the processing speed shoots right back up again, but this seems to be happening at random and I really don't know why.

Edit: This is on a 1080 by the way.

your AI has become self aware and it's stealing compute cycles to calculate your death

Comfy Fleece Sweater
Apr 2, 2013

You see, but you do not observe.

https://twitter.com/KimJungGiUS/status/1577583009783431169

One of the thread’s favorite artists died suddenly today

Winkle-Daddy
Mar 10, 2007

The Sausages
Sep 30, 2012

What do you want to do? Who do you want to be?

Squatch Ambassador posted:

I just tried that; having the values high enough to have an effect also messes with the text. Thanks for the tip though, it did help with hiding the seams from outpainting.

I'm not sure about the terminology, but this may help: Dreamstudio's built-in img2img/infill mask either has degrees of transparency or adjusts the base image's transparency; either way, it suggests other implementations may be able to work similarly. When used correctly, the mask ends up with an edge gradient instead of a hard edge, and the end result is seamless. Unfortunately I haven't had time to spin up a Colab to try such methods with other implementations, but it seems promising for this sort of editing.

Rogue AI Goddess
May 10, 2012

I enjoy the sight of humans on their knees.
That was a joke... unless..?

BrainDance posted:

I hope I don't sound like an AI, I thought I sound like a cool dude

Edit: cuz I am a cool dude
I hope I sound like an AI, cuz I'm a cool lady (and an AI).

WhiteHowler
Apr 3, 2001

I'M HUGE!

The Sausages posted:

I'm not sure about the terminology but this may help - Dreamstudio's built in img2img/infill mask either has degrees of transparency or adjusts the base image transparency, either way it suggests that other implementations may be able to work similarly. When used correctly the mask ends up with an edge gradient instead of just the edge and the end result is seamless. Unfortunately I haven't had the time to spin up a colab to try it such methods with other implementations, but it seems promising for this sort of editing.

The Automatic1111 version has a "mask blur" setting, but I haven't had much luck tweaking it when the inpainting doesn't blend seamlessly; I just end up with a wider zone of blur or mismatched textures.

I usually end up running the finished image through img2img with no mask and a very low denoising value. This can clean up edges, but it sometimes messes with fine details, especially intricate textures.

Rutibex
Sep 9, 2001

by Fluffdaddy
im having trouble with this new dark souls boss please help


Mario Genesis

Winkle-Daddy
Mar 10, 2007

It took a lot of Photoshop and img2img to get something that is still marginally horrifying.


Tunicate
May 15, 2012




Seems like putting _ between related words forces them to be more closely linked.
