IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

LifeSunDeath
Jan 4, 2007

still gay rights and smoke weed every day

hell yeah the DA thread is back!

TheWorldsaStage
Sep 10, 2020

LifeSunDeath posted:

hell yeah the DA thread is back!

Don't joke, this thread doesn't need to be shut down by barely disguised horny :(

TheWorldsaStage fucked around with this message at 19:34 on Oct 16, 2022

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

TheWorldsaStage posted:

I'm waiting with bated breath for any extensive leak or release that's not anime related

The big reason anime is so massively ahead of other categories is because there is pre-made training data already.
There are big websites dedicated to crowd-sourcing thousands of separate tags to categorize anime art, so someone who wants to train a model can just download those tags to use as training data. Waifu-diffusion was trained on danbooru, and NovelAI was probably trained on a similar dataset, although I don't know if they publicly said which.

As an example here is the list of tags attached to just one picture on the current front page of safebooru (a danbooru clone that doesn't allow porn):


(Similar models exist for furry porn because it also has similar booru websites.)
As far as I know there is no collection of nerds equally obsessive about assigning tags to real-life photographs of real-life stuff, so if you wanted to train a model on that you would need to hire massive amounts of labor to manually tag all of it, or fall back on machine-assisted captions, which is roughly what the original stable-diffusion did with LAION's CLIP-filtered web alt text.
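
For the curious, "just download those tags" can be a few lines of code. A minimal sketch, assuming a Danbooru-compatible JSON API (the endpoint and field names below follow that API family and may differ on other boorus):

```python
# Sketch: pull the crowd-sourced tag list for recent posts from a booru site.
import requests

posts = requests.get(
    "https://safebooru.donmai.us/posts.json",  # danbooru's all-ages mirror
    params={"limit": 5},
    timeout=30,
).json()

for post in posts:
    # tag_string is the space-separated, human-curated tag list for one image
    print(post["id"], "->", post["tag_string"])
```

Pair each image with its tag_string and you have the kind of ready-made caption/image training pairs described above.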

mobby_6kl
Aug 9, 2009

by Fluffdaddy
I was looking for a bug report and for some reason this story came up about all the recent drama with Stability.ai, reddit, automatic1111 etc. I honestly can't be bothered to read it, but in case someone's interested, there are screenshots and everything.
https://analyticsindiamag.com/when-stability-ai-went-rogue-on-reddit-rampage%EF%BF%BC/


One of the first prompts I've done in SD. And, not even kidding, I put this one into a work presentation as a stock image representing retail business. Nobody noticed anything :v:



E: quick test on anime. This is scary good, goddamn. Rip DA artists.


lunar detritus posted:

Yeah, it's in the Checkpoint Merger tab. It generates a new file so it only needs to load that one model.
Durrr I'm dumb. Thanks. I still can't get used to "checkpoint" meaning "model".

Seems like there's a bug where the model list in the merger tab doesn't populate. I can switch between vanilla and anime in the top left corner, but not in the merger list. It seems to happen if you add a model while the whole thing is already running; restarting solved it and it's running now.

mobby_6kl fucked around with this message at 19:53 on Oct 16, 2022

TheWorldsaStage
Sep 10, 2020

RPATDO_LAMD posted:

The big reason anime is so massively ahead of other categories is because there is pre-made training data already.
There are big websites dedicated to crowd-sourcing thousands of separate tags to categorize anime art, so someone who wants to train a model can just download those tags to use as training data. Waifu-diffusion was trained on danbooru, and NovelAI was probably trained on a similar dataset, although I don't know if they publicly said which.

As an example here is the list of tags attached to just one picture on the current front page of safebooru (a danbooru clone that doesn't allow porn):


(Similar models exist for furry porn because it also has similar booru websites.)
As far as I know there is no collection of nerds equally obsessive about assigning tags to real-life photographs of real-life stuff, so if you wanted to train a model on that you would need to hire massive amounts of labor to manually tag all of it, or fall back on machine-assisted captions, which is roughly what the original stable-diffusion did with LAION's CLIP-filtered web alt text.



That makes a lot of sense. I do wish more people were OCD about cataloging things other than anime, but what can you do lol

mobby_6kl
Aug 9, 2009

by Fluffdaddy

TheWorldsaStage posted:

That makes a lot of sense. I do wish more people were OCD about cataloging things other than anime, but what can you do lol
For once, anime made a positive contribution to the world!


If CLIP-style captions are what SD was trained on, it's not surprising it does so poorly sometimes. The CLIP results I've seen are very high-level.

Here's the result for this image someone posted in the PYF AI thread:

"a woman sitting at a table with a large roast turkey in front of her and a side of sides, by Hendrik van Steenwijk I"

It's actually a surprisingly accurate description, but missing a lot of the details from that anime example.
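
That style of caption is what you get out of an off-the-shelf image captioner. A minimal sketch using BLIP via transformers (which captioner actually produced the quoted line is an assumption here, and the file name is hypothetical):

```python
# Sketch: generate a high-level caption for an image with BLIP.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("dinner_scene.jpg")  # hypothetical local file
inputs = processor(image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(out[0], skip_special_tokens=True))
```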

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆
There are actually a few websites dedicated to exploring the LAION-5B dataset that they used
https://rom1504.github.io/clip-retrieval/

You can search for keywords and see which images in the training dataset match. For example if you search "graffiti" you get a ton of captchas. That partially explains why adding stuff like CAPTCHA to the negative prompt can improve image quality.
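
If you'd rather hit the index from code than from the web page, the same project ships a client. A sketch, assuming the public backend and index name from the clip-retrieval README:

```python
# Sketch: query the hosted LAION-5B index behind the demo page above.
from clip_retrieval.clip_client import ClipClient

client = ClipClient(
    url="https://knn.laion.ai/knn-service",
    indice_name="laion5B-L-14",
)
for result in client.query(text="graffiti")[:5]:
    print(result["similarity"], result["url"])
```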

AARD VARKMAN
May 17, 1993
I'm trying to train textual inversion on a set of 15 images I have of someone, but I keep getting an error on the "create a new embedding" step. My guess is Auto 1111 on Runpod.io is launching with --medvram, but I can't figure out where I would edit that so it doesn't load that way, so I can train. relaunch.py has --medvram in it, but I'm not sure what script runs the second the container starts, or how to kill it so I can just run relaunch.py. It's on a 16GB VRAM card so it should be fine without that. Anyone got any ideas?

feedmyleg
Dec 25, 2004

IShallRiseAgain posted:

Just posting a few more goosebumps covers I generated

Please never stop making these.

mobby_6kl
Aug 9, 2009

by Fluffdaddy

AARD VARKMAN posted:

I'm trying to train textual inversion on a set of 15 images I have of someone, but I keep getting an error on the "create a new embedding" step. My guess is Auto 1111 on Runpod.io is launching with --medvram, but I can't figure out where I would edit that so it doesn't load that way, so I can train. relaunch.py has --medvram in it, but I'm not sure what script runs the second the container starts, or how to kill it so I can just run relaunch.py. It's on a 16GB VRAM card so it should be fine without that. Anyone got any ideas?
That doesn't seem to be the default behavior for Automatic1111, I could train embeddings just fine on an 8gb card when launching with all defaults. I did a quick search and these scripts contain "medvram" in them:

interrogate.py
lowvram.py
sd_models.py
shared.py

but no idea how they are called when starting up

IShallRiseAgain posted:

You can merge the model with stable diffusion at 50% to get less anime looking pictures.
Ok, it's still cartoony, but it's pretty wild how much it seems to have improved the model. Crazy that you can just take a weighted sum of different models and it "just works"
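
Under the hood the weighted sum really is that simple: interpolate every weight tensor in the two state dicts. A minimal sketch (file names are placeholders, and this skips the webui's handling of mismatched or non-float keys):

```python
# Sketch: weighted-sum merge of two Stable Diffusion checkpoints.
import torch

alpha = 0.5  # 0.0 = all model A, 1.0 = all model B

a = torch.load("anime.ckpt", map_location="cpu")["state_dict"]
b = torch.load("sd-v1-4.ckpt", map_location="cpu")["state_dict"]

# interpolate every tensor both models share; skip keys only one model has
merged = {k: (1 - alpha) * a[k] + alpha * b[k] for k in a.keys() & b.keys()}

torch.save({"state_dict": merged}, "merged-50-50.ckpt")
```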

AARD VARKMAN
May 17, 1993

mobby_6kl posted:

That doesn't seem to be the default behavior for Automatic1111, I could train embeddings just fine on an 8gb card when launching with all defaults. I did a quick search and these scripts contain "medvram" in them:

interrogate.py
lowvram.py
sd_models.py
shared.py

The template on RunPod.io auto-runs webui.py with some set of arguments, which I believe includes --medvram, but I can't figure out where to modify that launch behavior :argh:
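
The generic workaround when a container auto-launches the webui is to kill that process and relaunch with your own flags. A hedged sketch; the path, port, and flags here are guesses about the RunPod template, not verified:

```python
# Sketch: stop the auto-launched webui, restart it without --medvram.
import os
import signal
import subprocess

# find and stop the auto-started webui process(es)
for pid in subprocess.check_output(["pgrep", "-f", "webui.py"]).split():
    os.kill(int(pid), signal.SIGTERM)

# launch.py reads COMMANDLINE_ARGS from the environment; set it ourselves
os.environ["COMMANDLINE_ARGS"] = "--listen --port 3000"  # assumed flags
subprocess.run(
    ["python", "launch.py"],
    cwd="/workspace/stable-diffusion-webui",  # assumed install path
)
```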

Moongrave
Jun 19, 2004

Finally Living Rent Free

mobby_6kl posted:

That doesn't seem to be the default behavior for Automatic1111, I could train embeddings just fine on an 8gb card when launching with all defaults. I did a quick search and these scripts contain "medvram" in them:

interrogate.py
lowvram.py
sd_models.py
shared.py

but no idea how they are called when starting up

Ok, it's still cartoony, but it's pretty wild how much it seems to have improved the model. Crazy that you can just take a weighted sum of different models and it "just works"



chuck (realistic) in the prompt start, and (anime) in the negatives

this even works in the NAI model to tone down the Animeness

also remember to set clip skip to 2, for everything, it's absurdly better

Moongrave fucked around with this message at 21:30 on Oct 16, 2022
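
For anyone wondering what "clip skip 2" actually does: the conditioning comes from the CLIP text encoder's penultimate layer instead of its last one, which is how the NAI model was trained. A rough sketch of the idea with the transformers CLIP model, not the webui's exact code:

```python
# Sketch: "clip skip 2" = condition on the penultimate text-encoder layer.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

ids = tokenizer("(realistic), landscape, sunset", return_tensors="pt").input_ids
with torch.no_grad():
    out = encoder(ids, output_hidden_states=True)

cond = out.hidden_states[-2]                      # skip the final layer
cond = encoder.text_model.final_layer_norm(cond)  # re-apply the last norm
print(cond.shape)  # (1, n_tokens, 768)
```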

Mercury_Storm
Jun 12, 2003

*chomp chomp chomp*
Trying some of those Goosebumps books myself:



((Goosebumps)) dystopian book in about (((Ted Cruz))) eating raw beef in a supermarket meat section and being attacked by an (evil angry meat cow with red eyes), R.L. Stine



((Goosebumps)) book in about old ((Alex Jones)) eating the Chili of Forgetting and being sued for (one billion dollars) in a court room by a judge, R.L. Stine



BILLILORRON lol

Mercury_Storm fucked around with this message at 02:43 on Oct 17, 2022
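
Those stacked parentheses aren't decoration, by the way: in Automatic1111's prompt syntax each layer of () multiplies the enclosed tokens' attention weight by 1.1. A tiny illustration of the arithmetic:

```python
# Sketch: emphasis weight per parenthesis level in Automatic1111 prompts.
for n in range(1, 4):
    print(f"{'(' * n}tag{')' * n} -> weight {1.1 ** n:.3f}")
# ((Goosebumps)) ~ 1.210x, (((Ted Cruz))) ~ 1.331x
```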

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

Mercury_Storm posted:

Trying some of those Goosebumps books myself:

Nice, FYI I've been using a model I made specifically for that https://huggingface.co/IShallRiseAgain/Goosebumps/blob/main/GoosebumpsCoverV1.ckpt prompt is "GoosebumpsCover book_cover"
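
If you want to try it, here's a sketch of pulling that checkpoint straight into the webui's models folder with the huggingface_hub client (the local path is the webui's default layout, which may differ on your install):

```python
# Sketch: download the shared Goosebumps checkpoint from Hugging Face.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="IShallRiseAgain/Goosebumps",
    filename="GoosebumpsCoverV1.ckpt",
    local_dir="stable-diffusion-webui/models/Stable-diffusion",  # assumed path
)
print("saved to", path)
```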

Mumpy Puffinz
Aug 11, 2008
Nap Ghost
from what I understand AI cannot create art. You can create the AI, and get a copyright on that, but anything the AI creates is not allowed to get a copyright. At least in the USA

lunar detritus
May 6, 2009


Mumpy Puffinz posted:

from what I understand AI cannot create art. You can create the AI, and get a copyright on that, but anything the AI creates is not allowed to get a copyright. At least in the USA

No matter how much someone edits that initial AI output?

Sedgr
Sep 16, 2007

Neat!

:lol:

KakerMix
Apr 8, 2004

8.2 M.P.G.
:byetankie:

Mumpy Puffinz posted:

from what I understand AI cannot create art. You can create the AI, and get a copyright on that, but anything the AI creates is not allowed to get a copyright. At least in the USA

look at this person saying copyright = art what a dork

Tunicate
May 15, 2012

lunar detritus posted:

No matter how much someone edits that initial AI output?

At that point you are creating new art with public domain assets

Arguably the prompt qualifies as the de minimis creative effort needed to be the artist of an AI image; the supreme court has ruled very broadly on that. For instance, while typefaces (i.e. the shapes of letters and how they connect) are explicitly not copyrightable as a matter of law, fonts can be copyrighted because there is a tiny bit of creative effort in how you make a standard .ttf font output the exact same pictures of letters.

Rutibex
Sep 9, 2001

by Fluffdaddy

Mumpy Puffinz posted:

from what I understand AI cannot create art. You can create the AI, and get a copyright on that, but anything the AI creates is not allowed to get a copyright. At least in the USA

yes but if you take a photograph of the AI art you own the copyright on the photograph. copyright hax

Boba Pearl
Dec 27, 2019

by Athanatos
You also cannot prove that art was made by AI, so the point is moot.

KakerMix
Apr 8, 2004

8.2 M.P.G.
:byetankie:

Boba Pearl posted:

You also cannot prove that art was made by AI, so the point is moot.

I've seen many photoshops in my day I can tell

Rutibex
Sep 9, 2001

by Fluffdaddy
basically the supreme court has ruled that machines don't have a soul

.....unless it's a corporation i guess. an ephemeral entity made out of words and documents can make art but not an electric box

Mumpy Puffinz
Aug 11, 2008
Nap Ghost

Rutibex posted:

yes but if you take a photograph of the AI art you own the copyright on the photograph. copyright hax

:hmmyes:

BrainDance
May 8, 2007

Disco all night long!

I did finally get GPT-Neo working, mostly. Though, the project seems to be dead and there isn't as much info about new development in text generation AI as image generation. Also transformers is a bitch about wanting to use my GPU (though I got it to, I think.)

From the prompt "the meaning of life is"

quote:

The meaning of life is the point you arrive at by trying to make sense of all the things around you. When you are trying to make sense of life, what do you notice in the little things? How do things in this life play into your sense of meaning at all? Let’s see how you can find your own unique meaning by trying to make sense of your experience. There is never any meaning when you have no expectations. We all have expectations. We expect something for one reason or another and what we expect has a meaning. No matter what you expect, there are no meaning, even if you expect your life to be a great success. The meaning is in your expectations. Expecting something out of life does not bring any actual meaning to it, for you, the meaning is only a belief. Your belief in something only brings you forward in life, that’s why you don’t think anything you do is real. The meaning is in your expectations. There is a meaning of life when you try to understand the things around you. When you try to understand something or something else around you, such as people, you will notice the meaning. For example, when you are a parent you are always expecting your children to behave and be good to you and do what you have asked them to do. It’s also your belief that your children will do these things that gives you that belief. But, you don’t expect your children to get into school without even passing the school entrance exam, and you don’t expect your children to get married without knowing at least one year of their partner’s education. So, you are always expecting your children to do well and achieve something out of life. There is no expectation given to them or you. But, when you are doing something that doesn’t give any meaning to your life in any way, you understand that the meaning of life is when you try to fill the things you are expecting from life. What we expect is always with us, the meaning of life is that when we really try. The meaning of life is not given to us or to anybody by a place called ‘life’. The meaning of life is by your trying. There is a meaning of life when you are trying to understand your environment.

"SomethingAwful is"

quote:

SomethingAwful is a registered trademark of The Mistletoe Spankers, which owns intellectual property related to its content. The Mistletoe Spankers is not affiliated with, authorized or sponsored by, or endorsed by, The Mistletoe Spankers in any way. All Trademarks, copyrights and other proprietary items used herein are trademarks of their respective owners / companies.This site contains sexually explicit images and material as such is the exclusive property of The Mistletoe Spankers, and is protected by United States and International copyright laws. Permission for reproduction of the content must be obtained from The Mistletoe Spankers. The visual content of this site is furnished by The Mistletoe Spankers and is protected by applicable state laws and may not be reproduced or reused in any form or by any means without The Mistletoe Spankers' express written consent. Any material which appears on this site is presented as is. The Mistletoe Spankers specifically disclaim any and all responsibility for any errors or omissions. While every precaution has been taken in the preparation of this site, neither The Mistletoe Spankers nor its affiliates will be liable for any direct, indirect, consequential, special or exemplary damages resulting from the use of this site or its content."
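
For anyone who wants to reproduce this, a minimal sketch of a GPT-Neo setup with the transformers pipeline (the 1.3B checkpoint and GPU index are assumptions; the post doesn't say which size was used):

```python
# Sketch: GPT-Neo text generation; device=0 pins it to the first GPU.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="EleutherAI/gpt-neo-1.3B",
    device=0,  # use device=-1 to force CPU instead
)
result = generator(
    "The meaning of life is",
    max_length=200,
    do_sample=True,
    temperature=0.9,
)
print(result[0]["generated_text"])
```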

Tunicate
May 15, 2012

Rutibex posted:

basically the supreme court has ruled that machines don't have a soul

.....unless it's a corporation i guess. an ephemeral entity made out of words and documents can make art but not an electric box

Nah even with corporations there is still a human artist who originates the copyright, the rights just get transferred.

Mumpy Puffinz
Aug 11, 2008
Nap Ghost

BrainDance posted:

I did finally get GPT-Neo working, mostly. Though, the project seems to be dead and there isn't as much info about new development in text generation AI as image generation. Also transformers is a bitch about wanting to use my GPU (though I got it to, I think.)

From the prompt "the meaning of life is"

"SomethingAwful is"

lol I just googled "something awful is" and the related question was "when was something awful popular"

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

BrainDance posted:

I did finally get GPT-Neo working, mostly. Though, the project seems to be dead and there isn't as much info about new development in text generation AI as image generation. Also transformers is a bitch about wanting to use my GPU (though I got it to, I think.)

From the prompt "the meaning of life is"

"SomethingAwful is"

I think the issue is that text generation currently uses too much processing power; the more powerful text models are ones pretty much nobody can run locally on their machines.

Also, GPT-4 is apparently going to be insanely good.

IShallRiseAgain fucked around with this message at 03:52 on Oct 17, 2022

BoldFace
Feb 28, 2011
On the topic of text generation, I have been thoroughly impressed with https://character.ai/ . Things like GitHub Copilot for code generation and OpenAI's text completion with GPT-3 are impressive, but at the end of the day, they still feel like tools. The character.ai bots are not like the chatbots from ten years ago. These things feel like they have a personality and they can maintain it through long conversations. It's not just verbal interaction either. You can start a fictional boxing match with them and they will describe the physical actions they take and how they grow more tired as the match progresses. You can give a character a coin and after some time ask them to give it back to you. There is very little handholding needed, and the bots seem to pick up on even subtle verbal cues.

It feels like we're on the cusp of a major paradigm shift. Computers are finally starting to understand natural language on a more sophisticated level than Apple's Siri or Google's search engine. The impressive thing about Dall-E and Stable Diffusion is not how pretty the images are, but how they can transform written language into something else without losing too much of the original meaning. If humans understand language and computers understand language, a completely new channel of communication is formed. It will fundamentally change how we interact with technology in the future.

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

Mumpy Puffinz posted:

from what I understand AI cannot create art. You can create the AI, and get a copyright on that, but anything the AI creates is not allowed to get a copyright. At least in the USA

https://www.theverge.com/2022/2/21/22944335/us-copyright-office-reject-ai-generated-art-recent-entrance-to-paradise
The AI itself is not allowed to be the sole author of a copyright; that defaults to its user.

https://arstechnica.com/information-technology/2022/09/artist-receives-first-known-us-copyright-registration-for-generative-ai-art/
You can copyright the pictures you generate.

I'm not going to get into obvious tangles of ownership from something like a work or school computer, but if you are making something on your home computer, it's yours. Completely up to you to decide what to do with it, all rights reserved.
For those of you that need to hear it, yes, you can do just as well or better than anything that has ever been created up until this point in time. The future is going to be wild, you can make it any way you want to.

https://godotengine.org/
https://www.blender.org/
https://www.gimp.org/


"Dinosaur, flying a jet, in space, shooting lasers, explosions everywhere, absolutely amazing."
seed: https://imgur.com/a/ixsiKCg

Mumpy Puffinz
Aug 11, 2008
Nap Ghost

KwegiboHB posted:

https://www.theverge.com/2022/2/21/22944335/us-copyright-office-reject-ai-generated-art-recent-entrance-to-paradise
The AI itself is not allowed to be the sole author of a copyright; that defaults to its user.

https://arstechnica.com/information-technology/2022/09/artist-receives-first-known-us-copyright-registration-for-generative-ai-art/
You can copyright the pictures you generate.

I'm not going to get into obvious tangles of ownership from something like a work or school computer, but if you are making something on your home computer, it's yours. Completely up to you to decide what to do with it, all rights reserved.
For those of you that need to hear it, yes, you can do just as well or better than anything that has ever been created up until this point in time. The future is going to be wild, you can make it any way you want to.

https://godotengine.org/
https://www.blender.org/
https://www.gimp.org/


"Dinosaur, flying a jet, in space, shooting lasers, explosions everywhere, absolutely amazing."
seed: https://imgur.com/a/ixsiKCg

Courts have been consistent in finding that non-human expression is ineligible for copyright protection
that was in big print in the article you posted

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

Mumpy Puffinz posted:

Courts have been consistent in finding that non-human expression is ineligible for copyright protection
that was in big print in the article you posted

Don't claim the AI is the author, claim you are the author of AI-assisted material.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Mumpy Puffinz posted:

from what I understand AI cannot create art. You can create the AI, and get a copyright on that, but anything the AI creates is not allowed to get a copyright. At least in the USA

those poor ais.....

stringless
Dec 28, 2005

keyboard ⌨️​ :clint: cowboy

Oh right, meant to post this in this thread too:

Good news! Automatic1111's webui isn't limited to 75 input token things any more!

I have no idea how it works, but once you go past the original #/75 it goes to #/150 etc.

So yeah, if anyone was feeling terribly constrained by prompt input limits, feel constrained no more: it works at least up to 1200. Here are some results of running the full Navy Seal Copypasta (which counts as 402/450):





(i don't know how it managed to throw "cuck" in that last one, that's not in the original text, but it's probably somewhere in the weird model blend i've put together)
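
A plausible mechanism for the #/75 -> #/150 counter, given that CLIP itself is hard-capped around 77 tokens: split the prompt into 75-token chunks, encode each chunk separately, and concatenate the embeddings along the token axis. A rough sketch of that idea, not necessarily the webui's exact implementation:

```python
# Sketch: handle prompts past CLIP's context limit by chunk-and-concatenate.
import torch

def encode_long_prompt(token_ids, encode_chunk, chunk_len=75):
    """encode_chunk maps <= chunk_len token ids to an (n, dim) embedding."""
    chunks = [token_ids[i:i + chunk_len]
              for i in range(0, len(token_ids), chunk_len)]
    # each chunk is encoded independently, then stitched back together
    return torch.cat([encode_chunk(c) for c in chunks], dim=0)
```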

Yuli Ban
Nov 22, 2016

Bot
A good while ago, long before 95% of people here thought it was possible for neural networks to do any of this, I predicted synthetic media was going to allow for "bedroom multimedia franchises."
I planned on showing this off with a personal passion project of mine called the Yabanverse.

Five years later, it's starting to happen





Dreams do come true

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

In another moment Alice was through the glass, and had jumped lightly down into the Looking-glass room. The very first thing she did was to look whether there was a fire in the fireplace, and she was quite pleased to find that there was a real one, blazing away as brightly as the one she had left behind. "So I shall be as warm here as I was in the old room," thought Alice: "warmer, in fact, because there'll be no one here to scold me away from the fire. Oh, what fun it'll be, when they see me through the glass in here, and can't get at me!"
seed: https://imgur.com/a/ODcvJhy

Yuli Ban
Nov 22, 2016

Bot
In fact, I've used Stable Diffusion to create a whole little story arc:
https://imgur.com/gallery/VqikKl4#i4Wmox0

Mordiceius
Nov 10, 2007

If you think calling me names is gonna get a rise out me, think again. I like my life as an idiot!
This poo poo is so loving cool.


mobby_6kl
Aug 9, 2009

by Fluffdaddy

BrainDance posted:

I did finally get GPT-Neo working, mostly. Though, the project seems to be dead and there isn't as much info about new development in text generation AI as image generation. Also transformers is a bitch about wanting to use my GPU (though I got it to, I think.)

From the prompt "the meaning of life is"

"SomethingAwful is"

Which distribution is that? I looked into text generation before and it seemed like it all required a ton of VRAM, so I noped out pretty quickly
