|
KitConstantine posted:And I was describing the decision by the copyright office the article was talking about, not the other specific case you are discussing That's still the Thaler case. He presented it to the courts and to the USCO not as an attempt to copyright the picture himself as his own creation (he was really doing this to make a statement and push the courts). He submitted it claiming "ownership of the machine," with the author identified as the "Creativity Machine" (the AI), stating that it "was autonomously created by a computer algorithm running on a machine." And the USCO specifically said, "Because Thaler has not raised this (contribution from a human author) as a basis for registration, the Board does not need to determine under what circumstances human involvement in the creation of machine-generated works would meet the statutory criteria for copyright protection." He presented it to them as completely autonomous, with no human involvement at all. And the USCO's opinion addresses it from that perspective: that "a monkey cannot register a copyright in photos it captures with a camera because the Copyright Act refers to an author's 'children,' 'widow,' 'grandchildren,' and 'widower.'" Really, that whole case is built entirely around the question of whether a non-human can legally be the sole author or owner of a creative work, which has absolutely nothing to do with whether an actual living, breathing person who used an AI to create an image and wants to copyright it can register it with themselves as the author. The Zarya of the Dawn thing, yeah, that's something. But nothing the USCO or the courts said in the Thaler case had anything to do with a human offering prompts (and Thaler presented it specifically as not a human offering prompts, but the AI working entirely on its own). What you presented was not how the USCO presented it.
It's like asking if AutoHotkey + Photoshop can patent or copyright anything and then transfer that to a human. It can't, but that has nothing to do with whether you can copyright stuff made in Photoshop. Just to be clear, because I'm kinda lost at how you're misinterpreting this: you linked to an article about the Thaler situation in the probe, which is why I'm talking about that. BrainDance fucked around with this message at 05:43 on Feb 25, 2023
# ? Feb 25, 2023 05:28 |
|
|
# ? May 29, 2024 14:22 |
|
KitConstantine posted:That's not accurate That's not accurate either. The most recent ruling was Midjourney-specific, not about AI art in general. At the heart of the matter is whether Midjourney operates randomly or not. "While additional prompts applied to one of these initial images can influence the subsequent images, the process is not controlled by the user because it is not possible to predict what Midjourney will create ahead of time." https://www.copyright.gov/comp3/ Compendium of U.S. Copyright Office Practices, 313.2 Works That Lack Human Authorship: "Similarly, the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author. The crucial question is 'whether the "work" is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.' U.S. COPYRIGHT OFFICE, REPORT TO THE LIBRARIAN OF CONGRESS BY THE REGISTER OF COPYRIGHTS 5 (1966)" What seems to be the crux of the copyright office's argument is that even if you prompt for "person smiling," you won't know what kind of person you'll get smiling. I don't think that would hold up in court, but that's just my opinion; I'm not taking anyone to court. As for intervention, nothing is generated without a person. Even if you set up a large batch and walk away, a person still intervened. And none of this applies to my Stable Diffusion setup at home, because especially with ControlNet I can have full control over every part of the creation of an image. I don't have to leave anything to randomness if I don't want to. Everything I make is mine to do with as I please.
|
# ? Feb 25, 2023 05:38 |
Doctor Zero posted:Pop Quiz: i will guess B is the AI because it seems almost sympathetic and the other option gives hard numbers. My theory is that concrete numbers are more likely to show up in an actual investor report so that the moneybags can be reassured and want to get even more fully invested with their cash money, and "wow we cut off 1200 dead weight nonproductive employees!" sounds good for those types. Sympathy is not what they want to hear.
|
|
# ? Feb 25, 2023 05:39 |
|
KwegiboHB posted:"While additional prompts applied to one of these initial images can influence the subsequent I wonder if this means more predictable generations using controlnet etc actually will be found copyrightable? (Especially sketch-to-image with an artist-created sketch...) I guess we'll have to wait and see until someone pays a lawyer to try it out.
|
# ? Feb 25, 2023 05:47 |
|
RPATDO_LAMD posted:I wonder if this means more predictable generations using controlnet etc actually will be found copyrightable? (Especially sketch-to-image with an artist-created sketch...) I think that's where we'll end up, but I kinda think (and hope) the standard will be lower than that. There's clearly some line where you have enough human influence that it's copyrightable; just where that line is hasn't been decided yet. I actually suspect it's pretty low, because it's so low in other things, and I even kind of wonder whether Zarya of the Dawn would have turned out differently if Kris had presented the human involvement in a different way. I'm sure that when we find that line, though, absolutely no one will be happy with it on either side of the debate.
|
# ? Feb 25, 2023 05:56 |
|
https://www.youtube.com/watch?v=AtKlp41atms
|
# ? Feb 25, 2023 06:27 |
|
RPATDO_LAMD posted:I wonder if this means more predictable generations using controlnet etc actually will be found copyrightable? (Especially sketch-to-image with an artist-created sketch...) I wonder what that could mean for things like --xformers.
|
# ? Feb 25, 2023 07:10 |
|
TheWorldsaStage posted:I'm very unclear on the rule broken here. "Has an opinion Kit didn't like"? Is that where we are now? From tripping balls to power tripping in 10 posts or less. What a thread.
|
# ? Feb 25, 2023 10:43 |
|
LASER BEAM DREAM posted:Someone made an OpenPose extension for Automatic1111! I might be dumb, but I can't figure out how to get good results with this. Any pointers?
|
# ? Feb 25, 2023 17:38 |
|
Happy to! Set your range equal to your render resolution; I'm staying at 512x512. Pose the figure as you like. The dots on either side of the head are the ears. You can drag-select multiple joints at the same time and scale them. You can also add additional figures, though it gets tricky when they overlap. Click Send to txt2img and you will be returned to that tab, with the figure in the ControlNet box. You need to select the Openpose model without a preprocessor, as we're sending it raw openpose data. Make adjustments as needed!
|
# ? Feb 25, 2023 18:45 |
|
So I've been playing around with ControlNet a bit and I'm not sure I entirely get it. It seems like basically Img2Img? Like maybe I'm not playing with the settings enough but I still seem to get a fair bit of random distortion.
|
# ? Feb 25, 2023 19:06 |
|
Mr Luxury Yacht posted:So I've been playing around with ControlNet a bit and I'm not sure I entirely get it. It seems like basically Img2Img? Like maybe I'm not playing with the settings enough but I still seem to get a fair bit of random distortion. It's like img2img for posing. Feed it an image of a person, tweak the knobs, toss that into txt2img, and (hopefully) all the people will be posed the same way. There's a million other features (like ^that^ pose editor) but that's basically it.
|
# ? Feb 25, 2023 19:16 |
|
Mr Luxury Yacht posted:So I've been playing around with ControlNet a bit and I'm not sure I entirely get it. It seems like basically Img2Img? Like maybe I'm not playing with the settings enough but I still seem to get a fair bit of random distortion. It's much more accurate to the original than img2img if you use the right model. Plus, as people have mentioned, openpose lets you pose the image very accurately.
|
# ? Feb 25, 2023 19:45 |
|
Cool. I did get weirded out a bit that after I installed a few of the safetensors models, using certain preprocessors and models (openpose, hed) prompted Automatic1111 to download some very non-safetensors .pth files, but I'm assuming those are legit? Other than openpose, what have people had the best results with?
|
# ? Feb 25, 2023 20:13 |
|
I've used fake scribble to scribble with good success. Canny works well, but will follow significantly more detail from the original, even after working with the thresholds. Work with the weight as well to allow your prompt to deviate more from your ControlNet sample. Edit: you no longer need the ControlNet .pth models. They have a safetensors version now that's quite a bit smaller. https://huggingface.co/webui/ControlNet-modules-safetensors/tree/main edit2: Depth is supposed to work with buildings and other architecture, but I have not played with it much yet. LASER BEAM DREAM fucked around with this message at 20:59 on Feb 25, 2023
# ? Feb 25, 2023 20:55 |
|
LASER BEAM DREAM posted:
Yeah that's what I was wondering about. I downloaded the openpose safetensor, placed it in the extension's models folder, but when I ran a picture with the model it still downloaded a different .pth model in the regular models folder ("hand_pose_model.pth").
|
# ? Feb 25, 2023 21:04 |
|
Ah, I see. I have those as well. I didn't realize it had done that, or I would have mentioned it! I haven't talked about it in the thread yet, but I do worry about how much code I've pulled and run in the last week or so. Normally I understand the guts of the stuff I'm working on better than this.
|
# ? Feb 25, 2023 21:28 |
|
LASER BEAM DREAM posted:Ah, I see. I have those as well. I didn't realize it had done that, or I would have mentioned it! Yeah I've scanned it and it doesn't seem to be malicious but I do get a bit uncomfortable with the amount of random code being downloaded. The safetensors change was a big reassurance for me.
|
# ? Feb 25, 2023 21:33 |
|
I have some steamed hams inspired prompts pixaal fucked around with this message at 23:40 on Feb 25, 2023
# ? Feb 25, 2023 23:26 |
|
Delightfully devilish, Simeur.
|
# ? Feb 26, 2023 02:09 |
EHBEKE SIMEUR PREIMS THEIFEM "Hohoho, no! Patented Satanic Burgers. Old family recipe!"
|
|
# ? Feb 26, 2023 03:07 |
|
been doing some more stuff with controlnet:
Johnny Five aces
I decided to try messing around with this again, since the technology has advanced so much:
Professor Xavier
Jean Grey
Goliath from Gargoyles
screenshot from Riven
Also, I made a Dagoth Ur version of the Orson Welles Pea Commercial Outtake: https://www.youtube.com/watch?v=hTkbzfsU_ms
edit: forgot I made a couple of Majima too. And here is some nightmare fuel Garfield I made using my cenobite model. IShallRiseAgain fucked around with this message at 09:23 on Feb 26, 2023
# ? Feb 26, 2023 08:24 |
|
New tool, not sure how much benefit it brings over the existing ones Composer: Creative and Controllable Image Synthesis with Composable Conditions https://damo-vilab.github.io/composer-page/ https://github.com/damo-vilab/composer
|
# ? Feb 26, 2023 10:26 |
|
Doctor Zero posted:Pop Quiz: B is the AI generated one because A sounds more like an actual real HR person/exec who has no empathy or soul.
|
# ? Feb 26, 2023 11:50 |
|
This image was inspired by describing goatse
|
# ? Feb 26, 2023 16:47 |
|
This probably applies to few people, but heads up for those with a 4090: you need to download the updated cuDNN libraries from NVIDIA and point your SD install at them to use the card effectively. My 4090 was performing worse than my 3080 until I did this. Guide and link here: https://www.reddit.com/r/StableDiffusion/comments/y71q5k/4090_cudnn_performancespeed_fix_automatic1111/
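As I understand the linked guide, the fix boils down to swapping newer cuDNN DLLs into torch's lib folder. One small gotcha if you script any kind of before/after check: version strings need numeric comparison, not string comparison. A minimal sketch (the 8.6.0 minimum is an illustrative assumption, check the guide for the real requirement):

```python
# Minimal sketch: compare dotted version strings numerically so that
# e.g. "8.10.0" correctly ranks above "8.9.0" (plain string comparison
# gets this wrong). The 8.6.0 minimum is an illustrative assumption.
def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def cudnn_meets_minimum(installed: str, minimum: str = "8.6.0") -> bool:
    return parse_version(installed) >= parse_version(minimum)

print(cudnn_meets_minimum("8.5.0"))   # → False
print(cudnn_meets_minimum("8.10.2"))  # → True
```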
|
# ? Feb 26, 2023 18:00 |
|
https://www.youtube.com/watch?v=GVT3WUa-48Y https://www.youtube.com/watch?v=_9LX9HSQkWo And on their subscriber website (paywalled) they have an hour-long step-by-step of how they did it.
|
# ? Feb 26, 2023 18:01 |
|
Has anyone tried the Facebook language model? Supposedly it runs on a V100, which can be 16GB (or 32GB, in which case almost everyone is out) https://arstechnica.com/information-technology/2023/02/chatgpt-on-your-pc-meta-unveils-new-ai-model-that-can-run-on-a-single-gpu/
|
# ? Feb 26, 2023 18:02 |
|
Well, someone went and did it. I was thinking of doing something like this myself. "Submit your best AI generated speculative fiction - pays $35 upon acceptance!" (I am super tempted to write something myself and submit it) https://starwardshadows.com/submissions-open-beauty-in-blood-drenched-circuits-an-ai-only-anthology/
|
# ? Feb 26, 2023 18:11 |
|
Doctor Zero posted:Pop Quiz:

The Contestants:
cinnamon rollout posted:I think it's A, I don't really have a good reason though
Confusedslight posted:Gut reaction A feels the most ai.
BrainDance posted:If the AI is GPT3 I think B. A is kind of a clunky, harder to read run on sentence. GPT3 has a lot of problems with text it generates but those are more human kinds of problems.
KillHour posted:A sounds even more horrifically sociopathic somehow so I'm going to say that's the real one.
SniperWoreConverse posted:i will guess B is the AI
Mercury_Storm posted:B is the AI generated one because A sounds more like an actual real HR person/exec who has no empathy or soul.

THE VOTING:
Option A - 2
Option B - 4

And the answer is..... OPTION C!

quote:Despite our record profits, we must also acknowledge the challenging economic conditions and make the tough decision to reduce our workforce and close several stores, while ensuring we support our employees and maintain our competitive edge.

Yes, I am a dick, sorry. Option A is from the CEO of News Corp. Option B is from the CEO of Impossible Foods. I originally read B and thought it sounded just like an AI-generated blurb. A was a red herring. I really was going to have all three options, but thought it might be interesting to see if anyone guessed "they are both human." But since B won, I was at least right in thinking it sounded like an AI. Chatbots really are best poised to take over executive, marketing, and HR positions. Of course, that will never happen.
|
# ? Feb 26, 2023 18:32 |
|
KwegiboHB posted:Ok here's a brief rundown of what I'll be trying out here. Someone wrote an automatic block merge searcher, seems like it should save you a huge amount of time. I think the only challenge would be tweaking the aesthetic scorer for your target look now. https://medium.com/@media_97267/the-automated-stable-diffusion-checkpoint-merger-autombw-44f8dfd38871
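For anyone wondering what the auto-merger is actually searching over: as I understand it, a block merge is just a per-block weighted average of two checkpoints' weights, and the automated version tries weight combinations and keeps whichever merge the aesthetic scorer likes best. A toy sketch with plain dicts standing in for state dicts (the block prefixes, scorer, and grid search here are all made up for illustration, not the tool's actual code):

```python
# Toy sketch of block-weighted checkpoint merging. Real checkpoints map
# layer names to tensors; plain floats stand in here, and the "aesthetic
# scorer" below is a made-up stand-in for illustration.
def merge_checkpoints(a: dict, b: dict, block_weights: dict) -> dict:
    """Per-key weighted average: w * a + (1 - w) * b, with w chosen per block."""
    merged = {}
    for key in a:
        # pick the weight for whichever block prefix this key belongs to
        w = next(wt for prefix, wt in block_weights.items() if key.startswith(prefix))
        merged[key] = w * a[key] + (1 - w) * b[key]
    return merged

def grid_search(a, b, score, steps=4):
    """Try a coarse grid of per-block weights, keep the best-scoring merge."""
    best, best_score = None, float("-inf")
    for i in range(steps + 1):
        for j in range(steps + 1):
            weights = {"input.": i / steps, "output.": j / steps}
            candidate = merge_checkpoints(a, b, weights)
            s = score(candidate)
            if s > best_score:
                best, best_score = candidate, s
    return best, best_score

# Two fake "checkpoints" and a fake scorer that likes values near 1.0
a = {"input.0": 0.0, "output.0": 2.0}
b = {"input.0": 2.0, "output.0": 0.0}
score = lambda m: -sum((v - 1.0) ** 2 for v in m.values())
merged, best = grid_search(a, b, score)
print(merged)  # → {'input.0': 1.0, 'output.0': 1.0}
```

The real tool searches per-UNet-block weights and scores actual generated images, but the shape of the loop is the same, which is why swapping in a scorer tuned to your target look is the whole game.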
|
# ? Feb 26, 2023 18:41 |
|
I know a lot of people who wrote corporate blogs and stuff to generate organic SEO, and now they're using AI, so it's both written and read only by algorithms
|
# ? Feb 26, 2023 18:50 |
|
i am maybe three days into playing with stable diffusion on a gaming laptop and i absolutely cannot fathom how hardware small enough to have its own pocket in a backpack can crank out machine hallucinations of this quality

ive managed to install sd and automatic1111 and feed it text prompts and then ive found some safetensors from civitai to yell at in the negative prompt box to try to back it away from huge anime tiddies seriously why is everything giant cans never mind i already know the answer and its depressing

i saw the openpose stuff above and kind of got that working but i feel like ive completely missed some other basics along the way

one of the party tricks id like to get better at is changing the style of a photo to oil painting or dagguerotype or whatever and i know that img2img in automatic1111 is the core of that but this tech moves so quickly that im having one devil of a time finding documentation or tutorials so if someone has the time and inclination to point me in a direction id be greatly appreciative

this stuff is startlingly fun once you get over the initial what the gently caress am i doing phase just choose a model and make a text prompt that does something interesting queue up a batch of 100 and come back later to a picture album of stuff that is wildly outside my ability to create on my own
|
# ? Feb 26, 2023 19:39 |
|
busalover posted:New tool, not sure how much benefit it brings over the existing ones Haven't had time to check, but for very precise transformations this seems tailor-made?
|
# ? Feb 26, 2023 19:55 |
|
Objective Action posted:Someone wrote an automatic block merge searcher, seems like it should save you a huge amount of time. I think the only challenge would be tweaking the aesthetic scorer for your target look now. Oh, thank you thank you thank you! I know I'll have to automate this whole process but this is all too new so I don't know all of the steps that will need automating yet. I'm ok with making these first steps manually but doing this every time is for the birds. This'll save a lot of effort so thank you again! This is cool as hell... BROTHER!!!!!!
|
# ? Feb 26, 2023 21:28 |
IShallRiseAgain posted:screenshot from Riven
|
|
# ? Feb 26, 2023 21:54 |
|
busalover posted:New tool, not sure how much benefit it brings over the existing ones Sketch + Semantic for Goatse is gonna be a thing.
|
# ? Feb 26, 2023 23:33 |
|
Fuzz posted:Sketch + Semantic forGoatse is gonna be a thing. I don't think it's hard to invoke the idea of goatse with prompting only
|
# ? Feb 27, 2023 00:11 |
|
mobby_6kl posted:Has anyone tried the Facebook language model? Supposedly it runs on a V100, which can be 16GB (or 32GB, in which case almost everyone is out)

I submitted a request for the full model with my teacher email as a kind of "please do some charity and gimme the model, we've got a computer science club." I doubt I'll get in but it's worth a shot! Even if it needs 32GB, this is where that FlexGen stuff might come in; the requirements aren't too insane, we just need to push them down a bit. That combination of smaller, more efficient but still capable models plus bigger memory optimizations, and we can get the text-gen equivalent of what Stable Diffusion was for image gen.

I've got a few models made for my Daoism/Stoicism merge model to make the book (though I still gotta get around to taking all the pictures of myself for the dreambooth models). Still zeroing in on the best configuration for the script that generates text (setting num_beams greater than 1 is supposed to just improve the output but take longer to generate, but it does the opposite?). But I got some good stuff. I should have been documenting it better, because now I can't remember exactly which model gave me this, but this was my favorite - the philosophy of someone trained on Daoism and Stoicism but not on the difference between the two:

quote:The world has been born and will die. Nothing can stop it. It will continue for all time; there is no need to worry.

As for the Chinese translation, I won't post that because almost no one here can read Chinese, but I did a real cool thing with ChatGPT. I asked it to translate the passage - no problem, but nothing special. Then I asked it to translate it in the style of Zhuangzi, and it did, but neither I nor my wife could read it because it was in Classical Chinese lol. So we asked it to translate it in the style of Zhuangzi but in a way modern Chinese speakers could understand, and it succeeded at that.
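On the num_beams thing, for anyone unfamiliar: beam search keeps the top-k partial sequences at each step instead of only the single greedy best, which is why higher values are slower but usually find higher-probability text. A toy stdlib sketch over a made-up next-token table (the tokens and probabilities are invented for illustration, nothing to do with the actual model):

```python
import math

# Made-up next-token probability table for illustration: given the last
# token, which tokens can follow and with what probability.
TABLE = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"way": 0.5, "end": 0.5},
    "a":   {"way": 0.9, "end": 0.1},
    "way": {"</s>": 1.0},
    "end": {"</s>": 1.0},
}

def beam_search(num_beams: int, max_len: int = 4):
    """Keep the num_beams highest-scoring partial sequences at each step."""
    beams = [(["<s>"], 0.0)]  # (tokens, total log-probability)
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens[-1] == "</s>":
                candidates.append((tokens, score))  # finished, carry forward
                continue
            for nxt, p in TABLE[tokens[-1]].items():
                candidates.append((tokens + [nxt], score + math.log(p)))
        # prune to the best num_beams candidates
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:num_beams]
    return beams[0]

print(beam_search(num_beams=1))  # greedy: commits to "the" and misses the better path
print(beam_search(num_beams=3))  # wider search finds the higher-probability "a way"
```

Greedy takes "the" (p=0.6) and ends at total probability 0.3, while the wider beam keeps "a" alive and finds "a way" at 0.36 - that's the whole trade: more candidates tracked per step, more compute, usually better output.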
|
# ? Feb 27, 2023 01:06 |
|
I've been experimenting with using AI-generated images as a basis for pixel art. The eventual goal is to make myself as self-sufficient as possible as an indie game dev while cutting down the workflow as much as I possibly can, so I don't spend a decade working on this game. None of these are raw AI generations, but they're all primarily AI-generated, then converted into pixel art and colorized manually. Also, very few of these are actual art assets for the game I'm working on, just exploratory testing of processes.

My first real attempt produced these, which are neat, but they were just based on generated images with nothing special in the prompt, whittled down to something vaguely pixel-art-ish with GIMP filters - I didn't have a process worked out yet.

Then I spent a while on environments; here are a couple distinct styles. Everything is grayscale because it makes filter-based conversion to pixel art much better, which means fewer manual pixel-by-pixel edits. Additionally, MJ generally produces far better images when it's limited to monochrome. The ones on the right are the more unique style of AI art I've been working on and are close to the final assets I plan to use in game development. The ones on the left are a more universal generic style, though I had a hard time keeping a consistent aesthetic between all of them - that just comes down to needing to generate more images and being more choosy about which ones I use, because there's one particular style in there which worked phenomenally. Working on these I was focused on figuring out promptable styles that convert well to pixel art. The best results surprisingly came without any game- or pixel-art-related verbiage in the prompt, and mostly came from stuffing a bunch of mystic emotive technobabble in there.

These required some manual edits to the art before applying filters, but overall not much was done to them - I would do some manual work on them before using them in a finished product, such as hand-colorizing them. Here's the raw AI original for comparison. Along the way with environments I finally figured out how to get Midjourney to produce images of planets, so I made some pixel art planets (which still need manual outlining and cleaning up of the edges, note to self).

Then I tried my hand at character sprites for a while. These were explicitly prompted as "spritesheets" and required a lot of manual editing. Here's a bunch of the unused raw generations (post-filtering); most of the characters above are absent from this image because I was pulling them out of this one to colorize them. Details as small as these characters are hard for MJ to handle, so these required a lot of deleting excess black pixel-by-pixel (it blends the background color into the outlines and causes all kinds of problems), manually drawing outlines, correcting anatomy, etc. The results weren't really worth the effort, but knowing what I know now I think I could do a much better job.

Some character portraits - the 3 in the middle show the progression from raw AI image (resized) -> GIMP filters applied and a small amount of manual touchups done on PC -> colorized and outlines added in Procreate. It still needs some work, but these were just a proof of concept; I didn't make the effort to prompt for unique art styles or characters.

Today I've been experimenting with using AI to make UI mockups for things like spell selection UIs, shops, inventories, levelup prompts, etc. The results aren't fantastic but there's some kinda neat stuff. Mostly all the details are too small and were lost during pixelization.

When I was working on these I wasn't envisioning them being used in an actual game, but more like a Forums CYOA thread or something, so the fact that they don't look like functional UIs and are more ornamental is okay. They're currently completely untouched other than deleting backgrounds and GIMP filters. I think they would have worked better overall if I had upscaled them before pixelizing, so I could keep the same pixel scale while having much larger UI frames. Anyway, here's a thing. deep dish peat moss fucked around with this message at 01:45 on Feb 27, 2023
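FWIW, the pixelize-and-posterize filter chain described above is simple enough to script if the GIMP steps get tedious. A minimal stdlib sketch on a grayscale image stored as a list of rows (real tooling like Pillow would handle this with resize/quantize calls; this just shows the two underlying operations):

```python
# Minimal sketch of "pixelize + posterize" on a grayscale image stored as
# a list of rows of 0-255 ints. Block-average to downscale, then snap
# each value to a small number of evenly spaced gray levels.
def pixelize(img, block=2, levels=4):
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, block):
        row = []
        for x in range(0, w, block):
            # average the block into a single "big pixel"
            vals = [img[yy][xx]
                    for yy in range(y, min(y + block, h))
                    for xx in range(x, min(x + block, w))]
            avg = sum(vals) / len(vals)
            # posterize: snap to the nearest of `levels` gray values
            step = 255 / (levels - 1)
            row.append(round(round(avg / step) * step))
        out.append(row)
    return out

img = [[0, 10, 240, 250],
       [5, 15, 245, 255],
       [120, 130, 60, 70],
       [125, 135, 65, 75]]
print(pixelize(img))  # → [[0, 255], [170, 85]]
```

Grayscale source images help here for exactly the reason mentioned above: with one channel, the posterize step maps cleanly onto a small palette, whereas color needs proper palette quantization.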
# ? Feb 27, 2023 01:35 |