|
|
# ? Oct 16, 2022 19:13 |
|
|
hell yeah the DA thread is back!
|
# ? Oct 16, 2022 19:14 |
|
LifeSunDeath posted:hell yeah the DA thread is back! Don't joke, this thread doesn't need to be shut down by barely disguised horny TheWorldsaStage fucked around with this message at 19:34 on Oct 16, 2022 |
# ? Oct 16, 2022 19:22 |
|
TheWorldsaStage posted:I'm waiting with bated breath for any extensive leak or release that's not anime related The big reason anime is so massively ahead of other categories is because there is pre-made training data already. There are big websites like danbooru dedicated to crowd-sourcing thousands of separate tags to categorize anime art, so someone who wants to train a model can just download those tags to use as training data. Waifu-diffusion was trained on danbooru, and novelAI was probably trained on a similar dataset although I don't know if they publicly said which. As an example here is the list of tags attached to just one picture on the current front page of safebooru (a danbooru clone that doesn't allow porn): (Similar models exist for furry porn because it also has similar booru websites.) As far as I know there is no collection of nerds equally obsessive about assigning tags to real-life photographs of real-life stuff, so if you wanted to train a model on that you would need to hire massive amounts of labor to manually tag all of it, or use AI-generated tags from the CLIP model (which is what the original stable-diffusion did).
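The "just download those tags" step usually means turning each image's tag list into a caption file the trainer reads. A minimal sketch of that conversion — the underscore-to-space cleanup and comma-joined format are common conventions, not any particular site's spec:

```python
def tags_to_caption(tags):
    """Convert underscore_separated booru tags into a comma-separated
    caption string, the format most fine-tuning scripts expect.
    Duplicates are dropped, original order is kept."""
    seen = []
    for tag in tags:
        cleaned = tag.replace("_", " ").strip()
        if cleaned and cleaned not in seen:
            seen.append(cleaned)
    return ", ".join(seen)
```

In practice you would write one such caption per image (e.g. `0001.txt` next to `0001.png`), which is roughly the dataset layout waifu-diffusion-style fine-tunes consume.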
|
# ? Oct 16, 2022 19:27 |
|
I was looking for a bug report and for some reason this story came up about all the recent drama with Stability.ai, reddit, automatic1111 etc. I honestly can't be bothered to read it but in case someone's interested, there are screenshots and everything. https://analyticsindiamag.com/when-stability-ai-went-rogue-on-reddit-rampage%EF%BF%BC/ One of the first prompts I've done in SD. And, not even kidding, I put this one into a work presentation as a stock image representing retail business. Nobody noticed anything E: quick test on anime. This is scary good, goddamn. Rip DA artists. lunar detritus posted:Yeah, it's in the Checkpoint Merger tab. It generates a new file so it only needs to load that one model. Seems like there's a bug that doesn't populate the model list in the merger tab. I can switch between vanilla and anime in the top left corner, but not in the merger list. Seems to happen if you add the model while the whole thing is already running, restarting it solved it and it's now running. mobby_6kl fucked around with this message at 19:53 on Oct 16, 2022
# ? Oct 16, 2022 19:32 |
|
RPATDO_LAMD posted:The big reason anime is so massively ahead of other categories is because there is pre-made training data already. That makes a lot of sense. I do wish more people were ocd about cataloging not-anime but what can you do lol
|
# ? Oct 16, 2022 19:33 |
|
TheWorldsaStage posted:That makes a lot of since. I do wish more people were ocd about cataloging not anime bit what can you do lol If the CLIP model is how SD was trained, it's not surprising it can do so poorly sometimes. The CLIP results I've seen are very high level. Here's the result for this image someone posted in the PYF AI thread: "a woman sitting at a table with a large roast turkey in front of her and a side of sides, by Hendrik van Steenwijk I" It's actually a surprisingly accurate description, but missing a lot of the details from that anime example.
|
# ? Oct 16, 2022 19:39 |
|
There are actually a few websites dedicated to exploring the Laion5B dataset that they used https://rom1504.github.io/clip-retrieval/ You can search for keywords and see which images in the training dataset match. For example if you search "graffiti" you get a ton of captchas. That partially explains why adding stuff like CAPTCHA to the negative prompt can improve image quality.
|
# ? Oct 16, 2022 19:46 |
|
I'm trying to train textual inversion on a set of 15 images I have of someone, but I keep getting an error on the "create a new embedding" step. My guess is Auto 1111 on Runpod.io is launching with --medvram, but I can't figure out where I would edit that to make it not load that way so I can train. relaunch.py has --medvram in it but I'm not sure what script it runs the second the container starts, or how to kill it so I can just run relaunch.py. It's on a 16gb vram card so it should be fine without that. Anyone got any ideas?
|
# ? Oct 16, 2022 20:01 |
|
IShallRiseAgain posted:Just posting a few more goosebumps covers I generated Please never stop making these.
|
# ? Oct 16, 2022 20:20 |
|
AARD VARKMAN posted:I'm trying to train textual inversion on a set of 15 images I have of someone, but I keep getting an error on the "create a new embedding" step. My guess is Auto 1111 on Runpod.io is launching with --medvram, but I can't figure out where I would edit that to make it not load that way so I can train. relaunch.py has --medvram in it but I'm not sure what script it runs the second the container starts, or how to kill it so I can just run relaunch.py. It's on a 16gb vram card so it should be fine without that. Anyone got any ideas? That doesn't seem to be the default behavior for Automatic1111, I could train embeddings just fine on an 8gb card when launching with all defaults. I did a quick search and these scripts contain "medvram" in them: interrogate.py lowvram.py sd_models.py shared.py but no idea how they are called when starting up IShallRiseAgain posted:You can merge the model with stable diffusion at 50% to get less anime looking pictures.
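The 50% merge IShallRiseAgain mentions is, as far as I understand it, just a per-weight linear interpolation between the two checkpoints' state dicts — roughly what the Checkpoint Merger tab's "weighted sum" mode does. A toy sketch with numpy arrays standing in for the real torch tensors:

```python
import numpy as np

def merge_checkpoints(state_a, state_b, alpha=0.5):
    """Weighted average of two model state dicts, key by key.
    alpha=0.5 is the '50%' merge; keys missing from either model are
    skipped here (a real merger has to decide how to handle those)."""
    merged = {}
    for key in state_a:
        if key in state_b:
            merged[key] = (1.0 - alpha) * state_a[key] + alpha * state_b[key]
    return merged
```

Sliding alpha toward 0 keeps more of the first model, toward 1 more of the second — which is why a 50% anime/vanilla blend looks "less anime" than the anime checkpoint alone.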
|
# ? Oct 16, 2022 20:25 |
|
mobby_6kl posted:That doesn't seem to be the default behavior for Automatic1111, I could train embeddings just fine on an 8gb card when launching with all defaults. I did a quick search and these scripts contain "medvram" in them: The template on RunPod.io auto runs webui.py with some set of arguments which I believe includes --medvram but I can't figure out where to modify that execution behavior
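For what it's worth, in a stock Automatic1111 install the flags usually come from the COMMANDLINE_ARGS variable in webui-user.sh (or whatever entrypoint script the RunPod template wraps around the webui). A guess at what to look for — the template's actual file names and flags are assumptions, check your container:

```shell
# Hypothetical webui-user.sh / container entrypoint excerpt.
# If the template hardcodes --medvram, removing it here (or overriding
# the variable before launch) should let training use the full 16 GB.
export COMMANDLINE_ARGS="--medvram --listen"   # delete --medvram for training

# Or kill the auto-started process and relaunch by hand, e.g.:
# python launch.py --listen
```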
|
# ? Oct 16, 2022 20:43 |
|
mobby_6kl posted:That doesn't seem to be the default behavior for Automatic1111, I could train embeddings just fine on an 8gb card when launching with all defaults. I did a quick search and these scripts contain "medvram" in them: chuck (realistic) in the prompt start, and (anime) in the negatives this even works in the NAI model to tone down the Animeness also remember to set clip skip to 2, for everything, it's absurdly better Moongrave fucked around with this message at 21:30 on Oct 16, 2022 |
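The parentheses syntax works because the webui scales a token's attention weight: each layer of ( ) multiplies it by 1.1, each layer of [ ] divides by 1.1 (and a `(word:1.5)` form sets it explicitly). A toy version of just the nesting math — the real parser in the webui handles a lot more:

```python
def attention_weight(fragment):
    """Return the emphasis multiplier implied by A1111-style nesting,
    e.g. '((anime))' -> 1.1 ** 2. Explicit '(word:1.5)' weights and
    mixed nesting are not handled in this sketch."""
    weight = 1.0
    while fragment.startswith("(") and fragment.endswith(")"):
        fragment = fragment[1:-1]
        weight *= 1.1
    while fragment.startswith("[") and fragment.endswith("]"):
        fragment = fragment[1:-1]
        weight /= 1.1
    return weight
```

So `(realistic)` in the prompt nudges that concept up about 10%, and `(anime)` in the negative prompt pushes it down by the same factor.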
# ? Oct 16, 2022 21:20 |
|
Trying some of those Goosebumps books myself: ((Goosebumps)) dystopian book in about (((Ted Cruz))) eating raw beef in a supermarket meat section and being attacked by an (evil angry meat cow with red eyes), R.L. Stine ((Goosebumps)) book in about old ((Alex Jones)) eating the Chili of Forgetting and being sued for (one billion dollars) in a court room by a judge, R.L. Stine BILLILORRON lol Mercury_Storm fucked around with this message at 02:43 on Oct 17, 2022 |
# ? Oct 17, 2022 02:40 |
|
Mercury_Storm posted:Trying some of those Goosebumps books myself: Nice, FYI I've been using a model I made specifically for that https://huggingface.co/IShallRiseAgain/Goosebumps/blob/main/GoosebumpsCoverV1.ckpt prompt is "GoosebumpsCover book_cover"
|
# ? Oct 17, 2022 02:49 |
|
from what I understand AI cannot create art. You can create the AI, and get a copyright on that, but anything the AI creates is not allowed to get a copyright. At least in the USA
|
# ? Oct 17, 2022 02:54 |
Mumpy Puffinz posted:from what I understand AI cannot create art. You can create the AI, and get a copyright on that, but anything the AI creates is not allowed to get a copyright. At least in the USA No matter how much someone edits that initial AI output?
|
|
# ? Oct 17, 2022 02:56 |
|
|
# ? Oct 17, 2022 02:57 |
|
Mumpy Puffinz posted:from what I understand AI cannot create art. You can create the AI, and get a copyright on that, but anything the AI creates is not allowed to get a copyright. At least in the USA look at this person saying copyright = art what a dork
|
# ? Oct 17, 2022 03:04 |
|
lunar detritus posted:No matter how much someone edits that initial AI output? At that point you are creating new art with public domain assets Arguably the prompt qualifies as the de minimis creative effort needed to be the artist of an ai image; the supreme court has ruled very broadly on that. For instance while typefaces (ie: the shapes of letters and how they connect) are explicitly not copyrightable as a matter of law, fonts can be copyrighted because there is a tiny bit of creative effort in how you make a standard .ttf font output the exact same pictures of letters.
|
# ? Oct 17, 2022 03:05 |
|
Mumpy Puffinz posted:from what I understand AI cannot create art. You can create the AI, and get a copyright on that, but anything the AI creates is not allowed to get a copyright. At least in the USA yes but if you take a photograph of the AI art you own the copyright on the photograph. copyright hax
|
# ? Oct 17, 2022 03:06 |
|
You also cannot prove that art was made by AI so the point is moot.
|
# ? Oct 17, 2022 03:07 |
|
Boba Pearl posted:You also can not prove that art was made by AI so the point is moot. I've seen many photoshops in my day I can tell
|
# ? Oct 17, 2022 03:09 |
|
basically the supreme court has ruled that machines dont have a soul .....unless its a corporation i guess. an ephemeral entity made out of words and documents can make arts but not an electric box
|
# ? Oct 17, 2022 03:09 |
|
Rutibex posted:yes but if you take a photograph of the AI art you own the copyright on the photograph. copyright hax
|
# ? Oct 17, 2022 03:13 |
|
I did finally get GPT-Neo working, mostly. Though, the project seems to be dead and there isn't as much info about new development in text generation AI as image generation. Also transformers is a bitch about wanting to use my GPU (though I got it to, I think.) From the prompt "the meaning of life is" quote:The meaning of life is the point you arrive at by trying to make sense of all the things around you. When you are trying to make sense of life, what do you notice in the little things? How do things in this life play into your sense of meaning at all? Let’s see how you can find your own unique meaning by trying to make sense of your experience. There is never any meaning when you have no expectations. We all have expectations. We expect something for one reason or another and what we expect has a meaning. No matter what you expect, there are no meaning, even if you expect your life to be a great success. The meaning is in your expectations. Expecting something out of life does not bring any actual meaning to it, for you, the meaning is only a belief. Your belief in something only brings you forward in life, that’s why you don’t think anything you do is real. The meaning is in your expectations. There is a meaning of life when you try to understand the things around you. When you try to understand something or something else around you, such as people, you will notice the meaning. For example, when you are a parent you are always expecting your children to behave and be good to you and do what you have asked them to do. It’s also your belief that your children will do these things that gives you that belief. But, you don’t expect your children to get into school without even passing the school entrance exam, and you don’t expect your children to get married without knowing at least one year of their partner’s education. So, you are always expecting your children to do well and achieve something out of life. There is no expectation given to them or you. 
But, when you are doing something that doesn’t give any meaning to your life in any way, you understand that the meaning of life is when you try to fill the things you are expecting from life. What we expect is always with us, the meaning of life is that when we really try. The meaning of life is not given to us or to anybody by a place called ‘life’. The meaning of life is by your trying. There is a meaning of life when you are trying to understand your environment. "SomethingAwful is" quote:SomethingAwful is a registered trademark of The Mistletoe Spankers, which owns intellectual property related to its content. The Mistletoe Spankers is not affiliated with, authorized or sponsored by, or endorsed by, The Mistletoe Spankers in any way. All Trademarks, copyrights and other proprietary items used herein are trademarks of their respective owners / companies.This site contains sexually explicit images and material as such is the exclusive property of The Mistletoe Spankers, and is protected by United States and International copyright laws. Permission for reproduction of the content must be obtained from The Mistletoe Spankers. The visual content of this site is furnished by The Mistletoe Spankers and is protected by applicable state laws and may not be reproduced or reused in any form or by any means without The Mistletoe Spankers' express written consent. Any material which appears on this site is presented as is. The Mistletoe Spankers specifically disclaim any and all responsibility for any errors or omissions. While every precaution has been taken in the preparation of this site, neither The Mistletoe Spankers nor its affiliates will be liable for any direct, indirect, consequential, special or exemplary damages resulting from the use of this site or its content."
|
# ? Oct 17, 2022 03:27 |
|
Rutibex posted:basically the supreme court has ruled that machines dont have a soul Nah even with corporations there is still a human artist who originates the copyright, the rights just get transferred.
|
# ? Oct 17, 2022 03:44 |
|
BrainDance posted:I did finally get GPT-Neo working, mostly. Though, the project seems to be dead and there isn't as much info about new development in text generation AI as image generation. Also transformers is a bitch about wanting to use my GPU (though I got it to, I think.) lol I just googled something awful is and the related question was when was something awful popular
|
# ? Oct 17, 2022 03:46 |
|
BrainDance posted:I did finally get GPT-Neo working, mostly. Though, the project seems to be dead and there isn't as much info about new development in text generation AI as image generation. Also transformers is a bitch about wanting to use my GPU (though I got it to, I think.) I think the issue is that text generation currently uses too much processing power, there are more powerful text models that pretty much nobody can run locally on their machines. Also, GPT-4 is apparently going to be insanely good. IShallRiseAgain fucked around with this message at 03:52 on Oct 17, 2022 |
# ? Oct 17, 2022 03:47 |
|
On the topic of text generation, I have been thoroughly impressed with https://character.ai/ . Things like Github Copilot for code generation and OpenAI's text completion with GPT-3 are impressive, but at the end of the day, they still feel like tools. The character.ai bots are not like the chatbots from ten years ago. These things feel like they have a personality and they can maintain it through long conversations. It's not just verbal interaction either. You can start a fictional boxing match with them and they will describe physical actions they take and how they grow more tired as the match progresses. You can give a character a coin and after some time ask them to give it back to you. There is very little handholding needed, and the bots seem to pick up even subtle verbal cues. It feels like we're on the cusp of a major paradigm shift. Computers are finally starting to understand natural language on a more sophisticated level than Apple's Siri or Google's search engine. The impressive thing about Dall-E and Stable Diffusion is not how pretty the images are, but how they can transform written language into something else without losing too much of the original meaning. If humans understand language and computers understand language, a completely new channel of communication is formed. It will fundamentally change how we interact with technology in the future.
|
# ? Oct 17, 2022 05:01 |
|
Mumpy Puffinz posted:from what I understand AI cannot create art. You can create the AI, and get a copyright on that, but anything the AI creates is not allowed to get a copyright. At least in the USA https://www.theverge.com/2022/2/21/22944335/us-copyright-office-reject-ai-generated-art-recent-entrance-to-paradise The AI itself is not allowed to be the sole author of a copyright, that defaults to its user. https://arstechnica.com/information-technology/2022/09/artist-receives-first-known-us-copyright-registration-for-generative-ai-art/ You can copyright the pictures you generate. I'm not going to get into obvious tangles of ownership from something like a work or school computer, but if you are making something on your home computer, it's yours. Completely up to you to decide what to do with it, all rights reserved. For those of you that need to hear it, yes, you can do just as well or better than anything that has ever been created up until this point in time. The future is going to be wild, you can make it any way you want to. https://godotengine.org/ https://www.blender.org/ https://www.gimp.org/ "Dinosaur, flying a jet, in space, shooting lasers, explosions everywhere, absolutely amazing." seed: https://imgur.com/a/ixsiKCg
|
# ? Oct 17, 2022 05:05 |
|
KwegiboHB posted:https://www.theverge.com/2022/2/21/22944335/us-copyright-office-reject-ai-generated-art-recent-entrance-to-paradise Courts have been consistent in finding that non-human expression is ineligible for copyright protection that was in big print in the article you posted
|
# ? Oct 17, 2022 05:16 |
|
Mumpy Puffinz posted:Courts have been consistent in finding that non-human expression is ineligible for copyright protection Don't claim the AI is the author, claim you are the author of AI assisted material.
|
# ? Oct 17, 2022 05:18 |
|
Mumpy Puffinz posted:from what I understand AI cannot create art. You can create the AI, and get a copyright on that, but anything the AI creates is not allowed to get a copyright. At least in the USA those poor ais.....
|
# ? Oct 17, 2022 05:36 |
Oh right, meant to post this in this thread too: Good news! Automatic1111's webui isn't limited to 75 input token things any more! I have no idea how it works, but once you go past the original #/75 it goes to #/150 etc. So yeah, if anyone was feeling terribly constrained by prompt input limits, feel constrained no more, it works at least up to 1200. Here's some results of running the full Navy Seal Copypasta (which counts as 402/450): (i don't know how it managed to throw "cuck" in that last one, that's not in the original text, but it's probably somewhere in the weird model blend i've put together)
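Presumably the lifted limit works by splitting the prompt into 75-token chunks, encoding each chunk separately, and concatenating the results — which would explain the counter stepping 75 → 150 → 225 and the Navy Seal pasta showing as 402/450 (six chunks of capacity 75). The chunking itself is trivial; this is a guess at the mechanism, not the webui's actual code:

```python
def chunk_tokens(tokens, size=75):
    """Split a token list into consecutive chunks of at most `size`.
    A 402-token prompt yields 6 chunks (5 full + one of 27), hence a
    displayed capacity of 6 * 75 = 450."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]
```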
|
|
# ? Oct 17, 2022 05:36 |
|
A good while ago, long before 95% of people here thought it was possible for neural networks to do any of this, I predicted synthetic media was going to allow for "bedroom multimedia franchises." I planned on showing this off with a personal passion project of mine called the Yabanverse. Five years later, it's starting to happen Dreams do come true
|
# ? Oct 17, 2022 05:41 |
|
In another moment Alice was through the glass, and had jumped lightly down into the Looking-glass room. The very first thing she did was to look whether there was a fire in the fireplace, and she was quite pleased to find that there was a real one, blazing away as brightly as the one she had left behind. "So I shall be as warm here as I was in the old room," thought Alice: "warmer, in fact, because there'll be no one here to scold me away from the fire. Oh, what fun it'll be, when they see me through the glass in here, and can't get at me!" seed: https://imgur.com/a/ODcvJhy
|
# ? Oct 17, 2022 05:41 |
|
In fact, I've used Stable Diffusion to create a whole little story arc: https://imgur.com/gallery/VqikKl4#i4Wmox0
|
# ? Oct 17, 2022 05:45 |
|
This poo poo is so loving cool.
|
# ? Oct 17, 2022 07:32 |
|
|
BrainDance posted:I did finally get GPT-Neo working, mostly. Though, the project seems to be dead and there isn't as much info about new development in text generation AI as image generation. Also transformers is a bitch about wanting to use my GPU (though I got it to, I think.) Which distribution is that? I looked into text generation before and it seemed like it all required a ton of VRAM so I noped out pretty quickly
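On the VRAM question: a rough rule of thumb is parameters × bytes-per-parameter just for the weights, so loading in fp16 halves the fp32 footprint and is usually what makes the 1.3B/2.7B GPT-Neo checkpoints fit on consumer cards. A back-of-envelope estimator, with the actual loading call shown only as a hedged comment since transformers arguments vary by version:

```python
def weights_gib(n_params, bytes_per_param=2):
    """Approximate GiB needed just for the model weights (activations
    and generation buffers add more on top of this)."""
    return n_params * bytes_per_param / 1024**3

# GPT-Neo 2.7B: ~5 GiB in fp16, ~10 GiB in fp32.
#
# Typical (version-dependent) transformers loading, as a sketch:
# model = AutoModelForCausalLM.from_pretrained(
#     "EleutherAI/gpt-neo-2.7B", torch_dtype=torch.float16).to("cuda")
```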
|
# ? Oct 17, 2022 07:59 |