|
cinnamon rollout posted:I used 4x-ultrasharp to upscale my stable diffusion images today and it did a really good job. It did such a good job that I am going to stop using topaz gigapixel and I'm going to use this instead from now on. It's completely crazy the kind of tools people are putting out there for free

Yep, I'm very pleased with what I get out of 4x-ultrasharp. I really liked the results I was getting from LDSR, but it was agonizingly slow, and 4x-ultrasharp has satisfied me the most of the quicker choices I can use right from within automatic1111. I also keep 4x-UniScaleV2 around as an alternative, since these things can have pretty variable results depending on the image in question, but yeah, 4x-ultrasharp is lovely.
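(If anyone wants to script this instead of clicking through the Extras tab: the webui exposes an upscale endpoint when launched with --api. A rough sketch below, assuming a default local install and that the 4x-UltraSharp model is installed under that exact name; the endpoint and field names are from memory of the automatic1111 API and may differ between versions.)

```python
# Rough sketch: upscale one image through automatic1111's extras API.
# Assumes the webui was started with --api on the default port; the
# payload fields follow the webui API docs as I remember them.
import base64
import requests

WEBUI = "http://127.0.0.1:7860"  # default local address (assumption)

with open("input.png", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "resize_mode": 0,             # 0 = scale by a factor
    "upscaling_resize": 4,        # 4x
    "upscaler_1": "4x-UltraSharp",
    "image": img_b64,
}

r = requests.post(f"{WEBUI}/sdapi/v1/extra-single-image", json=payload, timeout=300)
r.raise_for_status()

with open("upscaled.png", "wb") as f:
    f.write(base64.b64decode(r.json()["image"]))
```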
|
# ? Mar 24, 2023 04:04 |
|
It turns out that turning down the ControlNet weight really helps with videos: https://i.imgur.com/tzGiSrO.mp4
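(For anyone replicating this outside the webui: in the diffusers library the same knob is controlnet_conditioning_scale, the equivalent of the webui's "weight" slider. A minimal sketch, assuming the usual public SD 1.5 and canny ControlNet checkpoints; the model IDs, filenames, and the 0.5 value are just illustrative.)

```python
# Minimal sketch: lowering the ControlNet weight in diffusers.
# Model IDs below are the common public checkpoints, used here as examples.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

control = load_image("frame_0001_canny.png")  # hypothetical pre-processed video frame

out = pipe(
    "a watercolor painting of a dancer",
    image=control,
    controlnet_conditioning_scale=0.5,  # < 1.0 loosens the control map's grip
    num_inference_steps=20,
).images[0]
out.save("frame_0001_out.png")
```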
|
# ? Mar 24, 2023 06:05 |
|
TIP posted:decided to use the same stuff I prepared for this and see if I could turn her into Kurt Cobain

Just a thought - if you image-stabilized it beforehand, would it work better?
|
# ? Mar 24, 2023 14:14 |
|
Got into the Adobe AI beta, "Firefly." Composition prompts are limited; no celebs or that kind of thing. Style prompts are handled by a menu on the right where you select styles, lighting, etc. to build a style. No specific artists. It seems like the single most intuitive interface for a beginner that I have seen, which surprises me given how much of a pain in the butt Adobe products usually are to use. The outputs are fairly middling: better than DALL-E 2, not as good as Midjourney. They generate quite quickly, though. And so far everything seems to be free/unlimited. It won't replace MJ for me, but I will probably keep playing with it for simple work projects. It's currently only text-to-image; no inpainting or image-to-image or other editing. Supposedly some of that stuff is in the pipeline. You can also do some stuff with text effects, but I haven't played with that yet. The next option up is color manipulation, it looks like. Currently only mildly interesting to me.
|
# ? Mar 24, 2023 16:09 |
|
woops! https://twitter.com/OpenAI/status/1639297716869275649?s=20
|
# ? Mar 24, 2023 17:16 |
|
They did say not to give it sensitive info. I'm certain more than half of users gave it sensitive or PII info.
|
# ? Mar 24, 2023 17:23 |
|
It said if I was cool I would give it my SSN number. I just wanted to be cool.
|
# ? Mar 24, 2023 17:55 |
|
Nigmaetcetera posted:It said if I was cool I would give it my SSN number. I just wanted to be cool.

Oh you did the waifu tax filing. How much was your refund?
|
# ? Mar 24, 2023 18:42 |
|
Bing's image generator is really good at some things, even without tokens. It says 5 minutes but it's more like 20 seconds, and you get 4. It really nails this one:

Homer Simpson eating a ham sandwich dripping with brown mustard, stacked high, dripping with sauces

Still doesn't really do crying:

a man crying into his hands in front of a burning Tesla

It doesn't fully understand combining all these concepts, but it does a better job than the Stable Diffusion base model:

pirate cats sailing an airship, storm clouds, floating island, fog, jolly roger pirate flag, high quality, highly detailed

The images always feel kind of samey from a prompt compared to Stable Diffusion, but it really gets complicated directions, which in SD requires learning additional tools and spending the time to set them up for that image. bonus:

pixaal fucked around with this message at 22:46 on Mar 24, 2023
|
# ? Mar 24, 2023 22:42 |
|
Got into the Adobe Firefly beta. Unfortunately, it looks like the vast majority of its capabilities are currently gated off in an "In Exploration" phase. Right now it's just text-to-image and text effects. The text-to-image is pretty bad, DALL-E levels from what I can tell, and the text effects are pretty terrible. I tried to create letters out of "smoke" and it told me it was a banned word. Then I tried to make them out of clouds and cereal, watercolor versions of those, and illustrated versions of those, and they were all mediocre at best. I would not use anything here during an actual workflow. Excited to see these tools evolve, but there's nothing worthwhile there at the moment.

e: was curious because MJ5 is generally really good at The Simpsons: Hm.

feedmyleg fucked around with this message at 23:18 on Mar 24, 2023
# ? Mar 24, 2023 23:02 |
|
https://twitter.com/ImMachineAlpha/status/1639365174762041344
|
# ? Mar 24, 2023 23:18 |
|
MJ v5 is a lot of fun just throwing stuff in, since it doesn't seem to have nearly as much of the Midjourney "style" by default as the older versions did. Check out the size of this brisket
|
# ? Mar 25, 2023 00:48 |
|
mamma mia
|
# ? Mar 25, 2023 01:09 |
|
AARD VARKMAN posted:check out the size of this brisket

They should have sent a poet.
|
# ? Mar 25, 2023 01:10 |
|
i aint eating the top layer
|
# ? Mar 25, 2023 01:19 |
|
I was messing around with MJ5 last night and for whatever reason I got it into my head to try making images from imaginary power rangers type shows while keeping the look and feel of the original show
|
# ? Mar 25, 2023 01:33 |
|
Crapple! posted:Fake power rangers

It's wild that we're at a point where "what's the source for this funny picture?" can very easily be "nowhere, it's a goofy AI one-off concept" -- but that simultaneously means you can now just ask the machine to cook up a couple dozen iterations.
|
# ? Mar 25, 2023 01:37 |
|
Crapple! posted:I was messing around with MJ5 last night and for whatever reason I got it into my head to try making images from imaginary power rangers type shows while keeping the look and feel of the original show

I love the dude in the front
|
# ? Mar 25, 2023 01:38 |
|
I'm just imagining him saying "call us that one more time!" to a group of rowdy teens.
|
# ? Mar 25, 2023 03:41 |
|
AARD VARKMAN posted:Mj v5 is a lot of fun just throwing stuff in, since it doesn't seem to have nearly as much of the midjourney "style" by default as the older versions did

guy on the right is David Blaine-ing in anticipation like a cartoon
|
# ? Mar 25, 2023 04:28 |
|
Any tips from 4090 havers on how to get the most machine learning performance out of this card? I was doing 9 it/s generating 512x704 with Euler a, then I updated the CUDA DLLs to 8.11 in the lib folder and now it's doing 15 it/s, but I've heard about 4090s getting up to 60 it/s. I have xformers installed, although auto moans about it being an older version.
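(For reference: the usual suspects for webui speed are the attention backend (the --xformers or --opt-sdp-attention launch flags), fp16, and the cuDNN libraries bundled with torch. Below is a rough sanity check outside the webui, assuming diffusers and a recent PyTorch; the number won't match automatic1111's it/s exactly, but a big gap usually points at one of those.)

```python
# Rough throughput sanity check with diffusers, independent of the webui.
# Includes VAE decode, so it reads a little lower than the sampler's raw it/s.
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

try:
    # Needs the xformers package; on PyTorch 2.x the built-in SDP attention
    # is used automatically if this isn't enabled.
    pipe.enable_xformers_memory_efficient_attention()
except Exception:
    pass

steps = 30
start = time.time()
pipe("a photo of a cat", height=704, width=512, num_inference_steps=steps)
print(f"{steps / (time.time() - start):.1f} it/s (rough)")
```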
|
# ? Mar 25, 2023 08:44 |
|
cinnamon rollout posted:I used 4x-ultrasharp to upscale my stable diffusion images today and it did a really good job. It did such a good job that I am going to stop using topaz gigapixel and I'm going to use this instead from now on. It's completely crazy the kind of tools people are putting out there for free

Where do you get these new upscalers? I've been hearing and seeing a bunch of new poo poo, but my automatic 1111 updates and it's the same old poo poo every time, just a couple new versions of R-ESRGAN have shown up and all of them are mostly bad.
|
# ? Mar 25, 2023 12:39 |
|
I was linked to https://upscale.wiki/wiki/Model_Database#Universal_Models from Olivio's video on the upscaler. 4x is the 3rd link. https://www.youtube.com/watch?v=A6dQPMy_tHY
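(For anyone else wondering where these go: the ESRGAN-style .pth files from that database are generally dropped into the webui's models/ESRGAN folder and then show up in the upscaler dropdowns after a restart. A tiny sketch, with a placeholder URL since the actual download links live on the wiki page above; the folder layout is the usual automatic1111 default and may differ on other installs.)

```python
# Tiny sketch: download an ESRGAN-style upscaler into automatic1111's model folder.
# The URL is a placeholder - grab the real link from the model database page.
from pathlib import Path
import requests

MODEL_URL = "https://example.com/4x-UltraSharp.pth"  # placeholder, not a real link
dest = Path("stable-diffusion-webui/models/ESRGAN/4x-UltraSharp.pth")
dest.parent.mkdir(parents=True, exist_ok=True)

resp = requests.get(MODEL_URL, timeout=120)
resp.raise_for_status()
dest.write_bytes(resp.content)
print(f"saved {dest} ({len(resp.content) / 1e6:.1f} MB)")
```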
|
# ? Mar 25, 2023 13:51 |
|
Analytic Engine fucked around with this message at 04:50 on Mar 26, 2023 |
# ? Mar 26, 2023 02:57 |
|
when it's time to harvest the cotton candy and father doesn't even give you a chance to change out of your church clothes
|
# ? Mar 26, 2023 03:11 |
|
Ben Nerevarine posted:when it's time to harvest the cotton candy and father doesn't even give you a chance to change out of your church clothes
|
# ? Mar 26, 2023 03:18 |
|
Giant image dump of some of what I've been playing around with for styles / silly ideas

Crapple! posted:I was messing around with MJ5 last night and for whatever reason I got it into my head to try making images from imaginary power rangers type shows while keeping the look and feel of the original show

Similar theme, different mediums:
Muppets continue to be amazing as a style keyword in Midjourney V5:
birbs:
cinematic birbs:
video games:
"people of walmart" gives some excellent results but the faces are usually all warped to hell:
I can't get enough of the denim jacket guy with no shirt, he's wicked smahtt, according to his hat
Fashion catwalk photos with bizarre styles are great. These are all V4, I need to try some more of these in V5
finally, here is a fantastic cat:
|
# ? Mar 26, 2023 03:42 |
|
finally a show for me
|
# ? Mar 26, 2023 03:44 |
|
Roman posted:It's now no longer a parody, but its own actual art project. Dropped the fake Netflix branding and all that.

So yes, my AI art gallery / alternate reality blog for a fake TV show is underway. I was going to wait before promoting it, but it took way too long to do this bit so here it is. https://www.tumblr.com/anomalyztheseries/712828611521118208/anomaly-z-s2e8-a-vision-of-loss-rated-tv-ma-44?source=share
|
# ? Mar 26, 2023 06:14 |
|
Roman posted:So yes, my AI art gallery / alternate reality blog for a fake TV show is underway.

I'm going to laugh when you finally sign a Netflix deal over this.
|
# ? Mar 26, 2023 06:25 |
|
KwegiboHB posted:I'm going to laugh when you finally sign a netflix deal over this.

I consider it a "decoy project." I love sharing stuff from my work but I don't want to do that with the other real project I'm doing. So if something weird happens with copyright or I get ripped off or whatever, my real project is safe.
|
# ? Mar 26, 2023 06:40 |
|
KwegiboHB posted:I'm going to laugh when you finally sign a netflix deal over this.

In 10 years' time, they might not even need Netflix to make it, with how fast progress is going.
|
# ? Mar 26, 2023 06:46 |
|
IShallRiseAgain posted:in 10 years time, they might not even need Netflix to make it with how fast progress is going.

speaking of which, MORE TESTS OF MY VIDEO TO VIDEO SYSTEM

I figured out how to get everything running on Google Colab, which allows me to render at much higher resolution. Still finding the best settings to dial in at these higher resolutions, but early results are promising on even fairly complicated stuff:
https://i.imgur.com/MUomLo1.mp4
https://i.imgur.com/zPZeHHf.mp4

After doing those I wanted to see how it handled very simple scenes, so I used an untextured spinning cube as input and I'm pretty blown away:
https://i.imgur.com/A4XT62t.mp4
https://i.imgur.com/oUDdFfj.mp4
https://i.imgur.com/XZtoKz4.mp4
|
# ? Mar 26, 2023 08:16 |
|
Roman posted:I would too, considering I still have no plans to write any full script for this thing.

Honestly, the stuff you wrote along with the images would make a good pitch. Also, I think I'm starting to think in AI. I'd like to use created scenes for drawing practice, but realised that describing bodies often gets blacklisted. So instead of telling it to generate a "full body shot/whole body", I now specify "clean shoes, clean hair". Voila. Really weird. It like normalizes being a manipulative rear end in a top hat.
|
# ? Mar 26, 2023 09:39 |
|
Roman posted:So if something weird happens with copyright or I get ripped off or whatever, my real project is safe.

Well the short answer is: you're good. Both projects are safe. I gave a second then third reading of the Copyright Office's AI Generated Registration Guidance Rules https://www.federalregister.gov/doc...al-intelligence and all of the footnotes. This guidance letter is a big deal. It sets the stage going forward on what can and cannot be copyrighted with AI generated materials. I'm not going to do a line-by-line, but I will give a brief run-down of the important aspects.

"The Human Authorship Requirement": the "Expressive Material" images straight from Midjourney are not directly copyrightable, because when you solely give a prompt, "Ultimate Creative Control" of the "Traditional Elements Of Authorship" is conceived and executed by the AI. That doesn't prevent the images from being used in your project like anything else in the public domain. How you make use of them, specifically the decisions in layout, display, what's put together, separated, etc etc, are the expressive elements that "Complete" an original work of science or literature. That total work is what's copyrightable, and you are its "Author".

It's stated a few times how the specific mechanisms of the AI can change what's copyrightable. Text2img alone through Stable Diffusion wouldn't meet this bar, for example, but ControlNet would allow you the needed control over layout and contents so the computer becomes an "Assisting Instrument" instead. This would allow you traditional copyright protection directly on the images you generate. Side note, I expect Midjourney and the rest to implement their own version of ControlNet someday, and that will be huge.

The Copyright Office also has plans for a public inquiry so everyone can give their input on how this should be handled in the future, with topics including things like scraping, training datasets, and resultant outputs. You can bet that will be a circus. I can't wait.

Lastly, to push back against a common misconception: it's not true that you don't have any protections until you file the paperwork for copyright. You have all rights at the moment of creation. Registration can be done later or not at all, though an incredible amount of hassle awaits you if you have to take someone to court without it. It can be done though.
|
# ? Mar 26, 2023 10:18 |
|
TIP posted:after doing those I wanted to see how it handled very simple scenes, so I used an untextured spinning cube as input and I'm pretty blown away

Have you tried combining this with seed travel? Right now the details glued in place are the biggest giveaway that the seed stays while the controlnet changes; outright switching seeds would be back to flicker central, so maybe there's a middle ground.
|
# ? Mar 26, 2023 11:21 |
|
Ruffian Price posted:Have you tried combining this with seed travel? Right now the details glued in place are the biggest giveaway that the seed stays while the controlnet changes, outright switching seeds would be back to flicker central so maybe there's a middle ground

I'm not using the same seed across frames; I've made a fine tuning and a special process that makes it this consistent. Currently working with the person who made the gif2gif and frame2frame extensions to turn it into an extension for automatic1111.

It actually has significantly better temporal consistency when rendering videos at 384x384; getting to render at high resolutions revealed that I'm going to need to do a fine tuning for each resolution I want to render in for best results.
|
# ? Mar 26, 2023 11:45 |
|
The creator of LoRA did (or is still doing?) an AMA on reddit: https://old.reddit.com/r/StableDiffusion/comments/1223y27/im_the_creator_of_lora_how_can_i_make_it_better/
|
# ? Mar 26, 2023 12:32 |