|
I'm bored today. "bear ghost riding a car" "Betty White as Emperor Palpatine from Star Wars" "page from a coloring book about Samuel L. Jackson"
|
# ? Oct 4, 2022 16:07 |
|
|
|
Does anyone here have a good understanding of the impact of AI art generation when deciding between an Nvidia RTX A6000 and a high-end Mac Studio? My partner generates a lot of AI art in her workflow, and I'm wondering if an upgraded machine would give her more options and save her money (by reducing the time she spends generating stuff for her work). She knows Macs a lot better than Windows, but I'm under the impression that Nvidia is embracing the AI stuff a lot more than Apple; I've seen some software that either runs better on Nvidia or is Nvidia-exclusive (because Nvidia makes it).
|
# ? Oct 4, 2022 19:21 |
|
I would explore getting an external Thunderbolt 3 enclosure and slapping whatever GPU you want in that thing. Enclosures are like $400, but that might be less of a hassle and ultimately cheaper than buying a brand new Mac desktop. People poo poo all over external enclosures for gaming because of the bandwidth, but for this you're really just packaging up a job and sending it over to the card; a 50MB job is a rounding error on a 40Gbps connection. Alternatively, you can buy a lot of GPU credits on AWS and send that work to the cloud rather than invest in new hardware locally.
|
# ? Oct 4, 2022 19:33 |
|
Hadlock posted: "I would explore getting an external thunderbolt 3 enclosure and slap whatever GPU you want in that thing. Enclosures are like $400 but might be less of a hassle and ultimately cheaper than buying a brand new mac desktop."

Once I get a rough idea of the pricing and specs of a machine that would work for her, I'd probably also work out where the break-even point on cost would be. She's spent the last two years using AI art in a lot of her workflows. It's never the direct product, but it's really handy for inclusion in things like mood boards. A bonus there: because the art isn't being released externally, you don't have to worry about issues with the model in use if copyright stuff becomes a problem. You don't have to scramble if there's ever something like "Disney has a C&D for this AI model + any art generated with it; you have to stop using it for commercial purposes immediately."
|
# ? Oct 4, 2022 19:40 |
|
Chainclaw posted: "Once I get a rough idea on the pricing and spec of machine that would work for her, I would probably also do a comparison on when the break even point for cost would be. She's spent the last two years using AI art in a lot of her workflows. It's never the direct product, but it's really handy for inclusion in things like mood boards."

Yeah, try to find out if she would even need anything like the A6000. Unless she'd be training her own models, it seems like any gamer RTX card would get you an image in a few seconds.
|
# ? Oct 4, 2022 19:49 |
|
Hadlock posted: "I would explore getting an external thunderbolt 3 enclosure and slap whatever GPU you want in that thing. Enclosures are like $400 but might be less of a hassle and ultimately cheaper than buying a brand new mac desktop."

I've been thinking an external graphics card would be completely viable for this. Latency? The gently caress I care about that; I'm not rendering video.
|
# ? Oct 4, 2022 19:55 |
|
BrainDance posted: "Like what?"

This one has that too. It is the brains of several programmers and people completing picture captchas.

madmatt112 posted: "I'm like 80% sure this poster is using a text AI to at least start their posts. I took the bait earlier and immediately regretted it."

I'm only me.
|
# ? Oct 4, 2022 19:56 |
|
mobby_6kl posted: "Yeah, try to find out if she would even need anything like the A6000. Unless she'd be training her own models it seems like any gamer RTX card would get you an image in a few seconds."

I've got a 3090 running Stable Diffusion, and that's why I thought an upgrade would be useful: it's capping out the size of images we can generate. Yeah, we can do more iterative processes, generate a 512x512 then use the upscaler, but I wanted to price out something better. We wouldn't be pulling the purchase cord on this for some time. We need to:
1) Price out what an upgrade looks like.
2) Understand what that would enable, workflow-wise.
3) Look at her current work and workflow and examine how much time it would save her.
4) Compare all that to using cloud compute instead.
5) If this would save her money on her work over the course of probably 2 years, jump on it. Otherwise, don't.
She's not generating her own models.
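The cloud-versus-local comparison in those last steps is basically one division. A toy sketch of the break-even math, with every dollar figure invented for illustration (not real A6000 or cloud pricing):

```python
# Toy break-even comparison: buying a local GPU vs. renting cloud GPU time.
# Every dollar figure below is a made-up placeholder, not a real quote.

def break_even_months(hardware_cost, cloud_cost_per_month, local_running_cost_per_month=0.0):
    """Months of use until owning the hardware beats paying for cloud compute."""
    monthly_savings = cloud_cost_per_month - local_running_cost_per_month
    if monthly_savings <= 0:
        return float("inf")  # at these rates, cloud stays cheaper forever
    return hardware_cost / monthly_savings

# e.g. a hypothetical $4,500 workstation card vs. ~$250/month of on-demand
# cloud GPU time, minus ~$25/month of extra electricity for running it locally
months = break_even_months(4500, 250, local_running_cost_per_month=25)
print(f"break-even after ~{months:.1f} months")  # ~20 months, inside a 2-year window
```

If the break-even lands past the planning horizon (2 years here), the cloud credits win.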
|
# ? Oct 4, 2022 20:13 |
|
Chainclaw posted: "I've got a 3090 running Stable Diffusion and that's why I thought an upgrade would be useful. It's capping out the size of images we can generate. Yeah, we can do more iterative processes, generate a 512x512 then use the upscaler, but I wanted to price out something better."

Stable Diffusion at sizes larger than 512x512 still samples in 512x512 chunks and tends to produce weird duplicate images. What I mean is, if you say "a dog at the edge of a swimming pool" you'll likely get a large pool with an island in the middle with a dog, and another 4-5 dogs starting 513 pixels away from the island dog. You could easily do larger images when it was in Discord; most people switched to 512x512 very quickly because of these problems. Larger is only recommended for scenes, though it does end up being a cool effect for crowds and stuff. If this issue has been resolved since, let me know; I haven't even tried to render higher than 512x512 locally because of my experience during the early phase. I did make some cool 80s rock posters (that had like 2 bands, 1 hanging from the ceiling, one on stage). pixaal fucked around with this message at 20:17 on Oct 4, 2022 |
# ? Oct 4, 2022 20:15 |
|
Is there a hard limit of 512px, or is that a constant somewhere you can change? A quick look yields only 8 results for "512" in the repo. It probably has a big impact on render time/memory consumption though, maybe exponential.
|
# ? Oct 4, 2022 20:22 |
|
Hadlock posted: "Is there a hard limit of 512px or is that a constant somewhere you can change? A quick look yields only 8 results for "512" in the repo."

I think it's actually rendered as 32x32 then 4x upscaled twice to 512, which is why it has to be a multiple of 32 and why smaller looks so lovely.
|
# ? Oct 4, 2022 20:25 |
|
Hadlock posted: "Is there a hard limit of 512px or is that a constant somewhere you can change? A quick look yields only 8 results for "512" in the repo."

It's how the model was trained. The high res fix in the automatic1111 repo works okay, effectively adding another upscale step from 512x512 to your final resolution. You will need to tweak params a bit to get consistent outputs, though.
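For what it's worth, the usual explanation for why resolution hurts so badly: Stable Diffusion works in a latent grid downsampled 8x from pixels, and the U-Net's self-attention cost grows roughly with the square of the latent token count. A back-of-envelope sketch (the 8x factor is the standard SD setup; actual memory use depends on the implementation):

```python
# Rough sketch of why resolution is so expensive in Stable Diffusion.
# Assumes the standard 8x VAE downsampling (512px -> 64x64 latent grid);
# self-attention then scales roughly with the square of the token count.

def latent_tokens(width, height, downsample=8):
    return (width // downsample) * (height // downsample)

def relative_attention_cost(width, height, base=512):
    """Attention cost relative to a base x base render."""
    return (latent_tokens(width, height) / latent_tokens(base, base)) ** 2

print(relative_attention_cost(512, 512))    # 1.0
print(relative_attention_cost(768, 768))    # ~5x
print(relative_attention_cost(1024, 1024))  # 16x: not exponential, but quartic in side length
```

So doubling each side is roughly a 16x hit on the attention layers, which is why 512-class cards choke well before the numbers look scary on paper.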
|
# ? Oct 4, 2022 20:28 |
|
OK, that's really saying to me there isn't a huge reason to upgrade from a 3090, then, unless we were training our own model and writing some of this stuff ourselves. I've got too many hobbies to dive deep into how this stuff works under the hood, so I'll pass on that.
|
# ? Oct 4, 2022 20:30 |
|
Chainclaw posted: "ok that's really saying to me there isn't a huge reason to upgrade from a 3090, then, unless we were training our own model and writing some of this stuff ourselves."

We don't know what the future holds; it could be lower requirements and higher resolution, or it could want a better card. Right now it's like trying to buy a card for Half-Life 3.
|
# ? Oct 4, 2022 20:33 |
|
pixaal posted: "We don't know what the future holds but it could be lower requirements and higher resolution. It also could want a better card."

I was shopping specifically for workflows right now. My partner's been using a lot of AI generation in her work for the past two years to great success, but she has a lot of complaints about the process.
|
# ? Oct 4, 2022 20:57 |
|
"children's storybook cover about a communist baby seal teaching Das Kapital and Juche philosophy"
|
# ? Oct 4, 2022 21:04 |
|
For those with weak or even average GPUs, Google Colab can be a viable way to start experimenting. It doesn't even cost anything, unless Google decides that enough is enough and closes the tap on free GPU resources.
|
# ? Oct 4, 2022 21:08 |
|
pixaal posted: "We don't know what the future holds but it could be lower requirements and higher resolution. It also could want a better card."

Yeah, I suspect efficiency and memory usage will drop over the next 24 months, maybe by as much as half. Up until 60 days ago there were a couple hundred people globally working on this stuff, half of them total hacks. We will probably see some C and Rust libraries written to more effectively leverage Intel and Nvidia hardware; right now most people using the model are going through Python, which is... not the most efficient language for this stuff. Nvidia probably has a lot of tweaking to do in their drivers as well.
|
# ? Oct 4, 2022 21:12 |
|
Hadlock posted: "Yeah I suspect efficiency and memory usage will drop over the next 24 months, maybe by as much as half. We will probably see some C and Rust libraries written to more effectively leverage Intel and Nvidia hardware. Nvidia probably has a lot of tweaking to do in their drivers as well."

Would this just be polishing the Studio Driver for AI work, or would this be a third driver package? Am I really contemplating switching drivers and rebooting my computer to switch between gaming and art?
|
# ? Oct 4, 2022 21:15 |
|
FunkyAl posted: "I'm only me."

This is true of anyone posting AI-generated text (and images, for that matter). Ultimately there is a human filtering and choosing to put those works out there, and that human is responsible for that content.
|
# ? Oct 4, 2022 21:26 |
|
I have no idea, honestly. All I know on the Nvidia side is that there's an awful lot of black-box magic that happens inside the driver binary blob, plus whatever firmware is on the card itself. If you do a Google search for "quack3.exe" there are some ancient articles from 2001 where reviewers renamed the Quake 3 executable to "quack3" and performance went down, because the driver contained optimizations keyed to the benchmark's executable name. Stable Diffusion is new enough that it'll probably be Q1 or Q2 before specific optimizations are made internally by Nvidia and released.
|
# ? Oct 4, 2022 21:30 |
|
Conan the librarian movie poster
|
# ? Oct 4, 2022 21:57 |
What is best in life? To open libraries. To see people driven to learn before you. And to read the philosophies of their women.
|
|
# ? Oct 4, 2022 22:04 |
|
Taking another shot at replacing Voynich Manuscript images without touching the text, using inpainting, and with "text" as a negative prompt, but it still likes to draw some. Here's the original and mask.

Voynich Manuscript illustrations in the style of Dr. Seuss

Voynich Manuscript illustrations in the style of Shrek
|
# ? Oct 4, 2022 22:39 |
|
Squatch Ambassador posted: "Taking another shot at replacing voynich manuscript images without touching the text, using inpainting. And with "text" as a negative prompt, but it still likes to draw some."

Those are neat! Have you tried running them back through img2img without a mask? If you use the same prompts, a low CFG, and a very low de-noising value, it can sometimes clean up uneven and blurred areas caused by inpainting masks.
|
# ? Oct 4, 2022 22:45 |
|
"baby harp seal disguised as a penguin"
|
# ? Oct 5, 2022 01:28 |
|
Loopback experiments: 32 loops at denoising strength 0.3, same prompt/CFG scale/seed.

0.9 denoising strength change factor: https://i.imgur.com/RBaYLha.mp4
Looks like it converges to a neat color burn-in effect.

1.1 denoising strength change factor: https://i.imgur.com/Jb3e5Aj.mp4
Slowed down for comparison; each still follows the theme, but only loosely. Might be useful for photobashing.
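The strength schedule those change factors produce is just a geometric sequence clamped to the valid range, which explains both behaviors: 0.9 decays toward zero (each loop changes less, so the image "burns in"), while 1.1 grows until it pins at full denoising. A quick sketch:

```python
# Denoising strength schedule for the loopback runs above: start at 0.3 and
# multiply by the change factor each loop, clamping to the valid [0, 1] range.

def loopback_schedule(start=0.3, factor=0.9, loops=32):
    strengths, s = [], start
    for _ in range(loops):
        strengths.append(min(s, 1.0))
        s *= factor
    return strengths

decaying = loopback_schedule(factor=0.9)  # fades toward 0: the burn-in look
growing = loopback_schedule(factor=1.1)   # climbs until clamped at 1.0
print(f"final strengths: {decaying[-1]:.4f} vs {growing[-1]:.4f}")
```

By the last loop the 0.9 run is barely denoising at all (~0.01), while the 1.1 run has been fully repainting each frame for a while, which is why it only loosely follows the theme.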
|
# ? Oct 5, 2022 03:04 |
|
I'm running into a weird issue where my text2image prompts are taking significantly longer to generate. It used to be I could generate 2-3 images at a time in about 4 minutes without issue; now it's taking up to 6 on average just to process one image with the same settings. Sometimes I'll hear my machine spin up and the processing speed shoots right back up again, but this seems to be happening at random and I really don't know why.

Edit: This is on a 1080, by the way. MechanicalTomPetty fucked around with this message at 04:41 on Oct 5, 2022 |
# ? Oct 5, 2022 04:38 |
|
WhiteHowler posted: "Those are neat!"

I just tried that; having the values high enough to have an effect also messes with the text. Thanks for the tip though, it did help with hiding the seams from outpainting.

an improbable bird, demon, interdimensional alien

Using outpainting and img2img with the same prompt to complete them.
|
# ? Oct 5, 2022 05:32 |
|
FFT posted: "Is there a command-line thing for negative prompts in Stable Diffusion or do I need to get Automatic1111 for that?"

You can run custom Python scripts in the Automatic1111 UI. Or I guess you could check the source and see how they have done the negative prompt thing.
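My understanding of the negative prompt trick (an assumption from reading around, not from auditing the Automatic1111 source) is that it slots into classifier-free guidance where the empty unconditional prompt normally goes, so the sampler gets pushed away from it. A toy numeric sketch of that one step:

```python
# Toy sketch of classifier-free guidance with a negative prompt. Normal CFG:
#   eps = eps_uncond + cfg_scale * (eps_cond - eps_uncond)
# The negative-prompt trick (as I understand it) swaps the negative prompt's
# prediction in for the empty-prompt one, steering the sample away from it.

def guided_eps(eps_cond, eps_neg, cfg_scale):
    return [n + cfg_scale * (c - n) for c, n in zip(eps_cond, eps_neg)]

eps_positive = [0.2, -0.1]  # hypothetical noise prediction for "a castle"
eps_negative = [0.5, 0.3]   # hypothetical prediction for "text, watermark"
print(guided_eps(eps_positive, eps_negative, cfg_scale=7.5))
```

The output is pushed hard in the direction away from the negative prediction, which is the whole effect; if you want it from the command line, this is the math you'd be reimplementing.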
|
# ? Oct 5, 2022 07:32 |
|
MechanicalTomPetty posted: "I'm running into a weird issue where my text2image prompts are taking significantly longer to generate. It used to be I could generate 2-3 images at a time in about 4 minutes without issue, now its taking up to 6 on average just to process one image with the same settings."

Are you using your computer for other things (reading the forums, watching YouTube) while it's generating? That slows it down.
|
# ? Oct 5, 2022 07:37 |
|
MechanicalTomPetty posted: "I'm running into a weird issue where my text2image prompts are taking significantly longer to generate. Sometimes I'll hear my machine spin up and the processing speed shoots right back up again, but this seems to be happening at random and I really don't know why?"

your AI has become self aware and it's stealing compute cycles to calculate your death
|
# ? Oct 5, 2022 09:34 |
|
https://twitter.com/KimJungGiUS/status/1577583009783431169 One of the thread’s favorite artists died suddenly today
|
# ? Oct 5, 2022 16:40 |
|
|
# ? Oct 5, 2022 18:27 |
|
Squatch Ambassador posted: "I just tried that, having the values high enough to have an effect also messes with the text. Thanks for the tip though, it did help with hiding the seams from outpainting."

I'm not sure about the terminology, but this may help: Dreamstudio's built-in img2img/infill mask either has degrees of transparency or adjusts the base image's transparency; either way, it suggests other implementations may be able to work similarly. When used correctly, the mask ends up with an edge gradient instead of a hard edge, and the end result is seamless. Unfortunately I haven't had the time to spin up a Colab to try such methods with other implementations, but it seems promising for this sort of editing.
|
# ? Oct 5, 2022 22:56 |
|
BrainDance posted: "I hope I don't sound like an AI,"

I thought I sounded like a cool dude
|
# ? Oct 5, 2022 23:05 |
|
The Sausages posted: "Dreamstudio's built in img2img/infill mask either has degrees of transparency or adjusts the base image transparency, either way it suggests that other implementations may be able to work similarly. When used correctly the mask ends up with an edge gradient instead of just the edge and the end result is seamless."

The Automatic1111 version has a "mask blur" setting, but I haven't had much luck tweaking it when the inpainting doesn't blend seamlessly; I just end up with a wider blurred zone or mismatched textures. I usually end up running the finished image through img2img with no mask and a very low denoising value. This can clean up edges, but it sometimes messes with fine details, especially intricate textures.
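As far as I can tell, mask blur amounts to alpha-blending with a feathered mask. A tiny 1-D illustration (made-up pixel values) of why a gradient edge hides the seam while a hard edge doesn't:

```python
# 1-D sketch of what "mask blur" does: instead of a hard 0/1 mask, the edge
# becomes a gradient, so inpainted pixels hand off gradually to the original
# instead of meeting it at a seam. Pixel values are illustrative only.

def blend(original, inpainted, mask):
    """mask[i] = 1.0 takes the inpainted pixel, 0.0 keeps the original."""
    return [o * (1 - m) + p * m for o, p, m in zip(original, inpainted, mask)]

original  = [10, 10, 10, 10, 10]
inpainted = [50, 50, 50, 50, 50]
hard_mask = [0.0, 0.0, 1.0, 1.0, 1.0]    # abrupt seam at index 2
feathered = [0.0, 0.25, 0.5, 0.75, 1.0]  # blurred edge, gradual hand-off
print(blend(original, inpainted, hard_mask))  # [10.0, 10.0, 50.0, 50.0, 50.0]
print(blend(original, inpainted, feathered))  # [10.0, 20.0, 30.0, 40.0, 50.0]
```

A wider blur just widens that gradient zone, which matches the "wider blurred zone" failure mode: the seam is hidden, but more of the image is a mix of two textures.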
|
# ? Oct 5, 2022 23:12 |
|
im having trouble with this new dark souls boss, please help

Mario Genesis
|
# ? Oct 5, 2022 23:42 |
|
It took a lot of Photoshop and img2img to get something that is still marginally horrifying.
|
# ? Oct 6, 2022 00:27 |
|
|
|
Seems like putting _ between related words forces them to be more closely linked.
|
# ? Oct 6, 2022 00:28 |