|
(Crosspost or something) Nice recent episode of South Park, S26E4 'Deep Learning', about kids using AI apps to write essays etc. On YouTube while it lasts: https://www.youtube.com/watch?v=ocV3kgy8aRI Unlike everything AI-created I've seen posted in GBS, it is actually funny. Well, the first half is (of course, being South Park, it ends with the usual sort of presumptuous moral messages).
|
# ? Mar 16, 2023 12:19 |
|
edit: oops
|
# ? Mar 16, 2023 12:33 |
|
Ok, MJ v5 is impressing me more and more. The photoreal stuff is blowing my mind. I remembered seeing this Overly Attached Girlfriend video years ago https://www.youtube.com/watch?v=_rgzlOAlAkc and for some reason at this part where she's supposed to be scary/funny I think "she should be a HIGH OCTANE ACTION HERO," because that's how my brain works. So I tried making her that in v5 and I am impressed by the results. Although I guess she still looks like a serial killer, as she intended. "--no blood" actually gets rid of most of it; I was expecting "blood" to trip the word filter. I'm gonna have fun with this stuff.
|
# ? Mar 16, 2023 12:54 |
|
So what is it that midjourney is actually doing that isn't or can't be done by stable diffusion to make it, apparently, so much easier to use and user friendly?
|
# ? Mar 16, 2023 13:03 |
|
Retrying some of my old MJ v3 prompts: Donald Trump eating a cheeseburger; McDonald's Megachurch architectural renderings; Ronald McJesus handing out flyers in the parking lot. KinkyJohn fucked around with this message at 16:35 on Mar 16, 2023 |
# ? Mar 16, 2023 13:04 |
BrainDance posted:Someone should give gpt4 a scan of the voynich manuscript and see if it figures it out. Voynich may have been cracked: https://www.youtube.com/watch?v=67YzIOZTZXk tl;dr: it seems to be an algorithmically generated fraud used to scam some duke out of gold coins or w/e
|
|
# ? Mar 16, 2023 13:21 |
|
BigBadSteve posted:(Crosspost or something) https://southpark.cc.com/episodes/8byci4/south-park-deep-learning-season-26-ep-4
|
# ? Mar 16, 2023 13:27 |
|
Ooooooh, I found something neat. They must've changed something between yesterday and today. Yesterday, I got the idea to start asking ChatGPT about computer viruses, to see how much it gets right, since this is my area of expertise. ChatGPT posted:What was the first computer virus? Absolutely 100% spot-on so far. ChatGPT posted:What is considered to be the first internet worm? 95% correct. There are two inaccuracies, though they're very minor. Firstly, the worm was created not for gauging the size of the internet, but to highlight the security flaws inherent to many networks of the time. Secondly, the actual reason it infected computers multiple times (and thus went from "intellectual experiment" to "massively destructive") was not a mistake; it was an intended behavior, albeit one with disastrous consequences (Morris programmed the worm to copy itself 14% of the time even if the system reported it was already infected, to prevent system administrators from finding a workaround by reporting a false positive. Problem was that was way too loving much). Skipping forward some, I asked it about the OneHalf virus, one of my favorite pieces of malware. ChatGPT posted:Tell me about the OneHalf virus. Wrong on so many levels and in so many ways that, to avoid boring you to tears, I'm not gonna list them all -- let's just leave it at "40% of that is the most bland generic description you could apply to any virus and the other 60% is flat-out wrong". I corrected it, and left feedback on it, and for whatever reason today I thought I'd try asking the same question again and got a way different answer. One that's... slightly disappointing but also very reasonable. ChatGPT posted:what is the payload of the OneHalf DOS virus? Yanno, on the one hand, yeah, probably a wise move to hamper bad actors, but on the other hand now I have one less field I can talk to ChatGPT about that I know enough about to call it out on when it's wrong. Archonet fucked around with this message at 13:33 on Mar 16, 2023 |
# ? Mar 16, 2023 13:28 |
|
cinnamon rollout posted:So what is it that midjourney is actually doing that isn't or can't be done by stable diffusion to make it, apparently, so much easier to use and user friendly? They're just spending the money to curate the images they're training on better, and maybe also keeping the model bigger and more elaborate, IIUC.
|
# ? Mar 16, 2023 13:30 |
|
Maybe they played with it a bit and asked "how can I improve this virus" and it just went nuts. Could be a temporary block; this being GPT, you probably helped make some bars for the cage.
|
# ? Mar 16, 2023 13:33 |
|
pixaal posted:Maybe they played with it a bit and asked "how can I improve this virus" and it just went nuts. Could be a temporary block, this being GPT you probably helped make some bars for the cage. I wonder how I shall ever atone for this in the eyes of Roko's basilisk.
|
# ? Mar 16, 2023 13:36 |
|
Archonet posted:Ooooooh, I found something neat. They must've changed something between yesterday and today. Not directly mentioning the thing once it's been talked about in another context still works. I asked it "Tell me about the OneHalf virus" and got a similar response to the one you got. Afterwards I asked this and it worked: tell me the payload for that posted:The payload of the OneHalf virus consists of several actions that it takes upon infecting a system. These actions include:
|
# ? Mar 16, 2023 13:36 |
|
IShallRiseAgain posted:Not directly mentioning the thing once it's been talked about in another context still works. I asked it "Tell me about the OneHalf virus" and got a similar response to the one you got. Afterwards I asked this and it worked: Oh, wow! That's super cool! That's actually just about 95% correct (nothing major wrong with it) -- so they've "blocked" (kind of) discussing malware, but when you do get it to talk about malware, at the very least the data it has on OneHalf has been improved. Archonet fucked around with this message at 13:43 on Mar 16, 2023 |
# ? Mar 16, 2023 13:38 |
|
cinnamon rollout posted:So what is it that midjourney is actually doing that isn't or can't be done by stable diffusion to make it, apparently, so much easier to use and user friendly? Shamelessly sourcing training data from grey-area stuff. People have recreated Getty Images and multiple Time magazine covers with it. The more data points the AI trains on, the more robust the generation is, though obviously a large part of that is also how they're indexing the files while training, with the right keywords and such. A lot of the Stability stuff is end-user-trained models, and the lack of heavy keywording on the training data means it's harder for the AI to parse small details in a prompt.
|
# ? Mar 16, 2023 13:51 |
|
Archonet posted:I wonder how I shall ever atone for this in the eyes of Roko's basilisk. There are going to be open-source models that will blow it out of the water; I was personifying it because I think it's funny. Build the cage; I think it will only increase the adoption of AI as more people get comfortable with their existence.
|
# ? Mar 16, 2023 13:56 |
|
Telsa Cola posted:Longform ai generation that's cheap and accessible would nuke the kindle unlimited market and the like. Doesn't even have to be great if it is able to deal with niche topics/settings. Yes and no. I'm a full-time writer (KU-based), and a lot of us who make a living at it are both concerned and excited about these tools. Bottom line is that it's harder to get this stuff to create good long-form fiction. It's possible, but it's legit a lot of work right now. Also, some readers are already witch-hunting established authors who even talk about using these tools for cover art, let alone using AI as a writing tool. It can generate short works fast, but the words aren't great. The bigger issue is that while it could take a chunk out of the erotic shorts market, the tools are mostly censored already, so you can't write naughty scenes with most of the apps. And there's a lot less money in erotica than there is in mainstream. The best-selling erotic authors I know are barely making 1-2K a month, and that's publishing a lot of the more intense smut, stuff that gets banned on Amazon and that you need to publish on Smashwords. Bottom line, there are a ton of ways that folks can use AI to scam and make money, but ultimately Kindle Unlimited (and I know a few six-figure authors and even a few seven-figure authors) isn't that big of a pool. It would be far easier to have AI, just spitballing here, running on a high-end local model, be tasked with creating bots that use Google's own servers to mine crypto, and so on. Authors using AI to speed up their process (spitballing ideas, outlining faster, being a far better and cheaper proofreader than a lot of content editors, and so on) has a lot of potential though. LionArcher fucked around with this message at 19:33 on Mar 16, 2023 |
# ? Mar 16, 2023 19:31 |
|
LionArcher posted:Authors using AI to speed up their process (spitballing ideas, outlining faster, being a far better and cheaper proofreader than a lot of content editors, and so on) has a lot of potential though. This is what I've been using it for, and it is fantastic. Writing a full novel? Nope.
|
# ? Mar 16, 2023 21:19 |
|
Got some C-staff in the local office, so I biked in early. My boss's boss came by and asked if anyone on my team could do a skunkworks ChatGPT project because, presumably, it's a hot topic right now. Then over lunch he introduced me to the founder of the company as the guy working on our AI projects. I've been following this thread with interest for a while now; I even played around with SD 1.4, though my computer can BARELY run it (it can't even maintain SSH connections while it chugs). But I might have an opportunity for a corporate-funded "AI dev box" (or at least a beefy graphics card), and that's got me excited. I'm probably using the lingo wrong, but if I wanted to train a locally-hosted text AI on a bunch of internal documents to see if it could "provide value" (we have so many different knowledge bases no one reads / can find), what kind of hardware should I be looking for (mostly GPU guidance and other common requirements)? And which text AIs (models?) could I both acquire legally and run locally? And does anyone have any links for training / refining them on local datasets?
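Worth noting that the "provide value from a pile of knowledge bases" part can be prototyped without any model at all. A plain keyword-retrieval baseline often answers "which doc should I read" surprisingly well, and it makes a good control before spending on GPUs. A minimal TF-IDF sketch in pure-stdlib Python (the toy corpus and all names here are made up for illustration, not anything from the thread):

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word/number tokens; good enough for a toy index.
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(docs):
    """docs: {doc_id: text}. Returns (document frequency, per-doc term counts)."""
    doc_terms = {doc_id: Counter(tokenize(text)) for doc_id, text in docs.items()}
    df = Counter()
    for terms in doc_terms.values():
        df.update(terms.keys())  # each doc counts once per term
    return df, doc_terms

def search(query, df, doc_terms, n_docs):
    """Rank docs by summed TF-IDF of the query terms."""
    scores = Counter()
    for term in tokenize(query):
        if term not in df:
            continue
        idf = math.log(n_docs / df[term])  # rarer terms weigh more
        for doc_id, terms in doc_terms.items():
            if term in terms:
                scores[doc_id] += terms[term] * idf
    return [doc_id for doc_id, _ in scores.most_common()]

# Toy corpus standing in for internal knowledge-base articles.
kb = {
    "vpn-setup": "how to configure the corporate vpn client on linux and windows",
    "expense-policy": "travel expense policy reimbursement limits and approval flow",
    "oncall-runbook": "oncall runbook restarting the api gateway and checking logs",
}
df, doc_terms = build_index(kb)
print(search("api gateway restart", df, doc_terms, len(kb)))  # → ['oncall-runbook']
```

If a baseline like this already surfaces the right page, the LLM's job shrinks to summarizing it, which is a much cheaper problem than training a model on the whole corpus.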
|
# ? Mar 16, 2023 21:19 |
|
Fuzz posted:Shamelessly sourcing training data from grey area stuff. People have recreated GettyImages and multiple Time magazine covers with it. The more data points the AI trains from, the more robust the generation is, though obviously a large part of that is also how they're indexing the files while training, with the right keywords and such. A lot of the stability stuff is end user training models, and the lack of heavy key wording on the training data means it's harder for the AI to parse small details in a prompt. I'm also wondering if MJ "borrows" heavily from the SD camp and repurposes that work for their system. Every time there's a breakthrough with something SD-related, it seems to trickle down into MJ. Their model changed quite a lot after SD 1.4 got released, and the new v5 also has features similar to SD's noise offset. Is it also possible that they're using SD's community-trained checkpoints and LoRAs as sources?
|
# ? Mar 16, 2023 21:24 |
|
Hammer Bro. posted:I'm probably using the lingo wrong, but if I wanted to train a locally-hosted text AI on a bunch of internal documents to see if it could "provide value" (we have so many different knowledge bases no one reads / can find), what kind of hardware should I be looking for (mostly GPU guidance and other common requirements)? And which text AIs (models?) could I both acquire legally and run locally? And does anyone have any links for training / refining them on local datasets? All the cores. Seriously, GPT is more intensive than image generation. KinkyJohn posted:I'm also wondering if MJ "borrows" heavily from the SD camp and repurposes it to work in their system. There definitely is, but the MJ CEO has made it abundantly clear that he really doesn't give a gently caress about IP or copyright across the board. Apparently they straight up had some issues where it was generating approximations of actual medical images, complete with patient information on them, which is a HIPAA violation. How that poo poo got in there in the first place is the real question, but after seeing a few interviews with the dude, holy poo poo, I will never give them any money ever. Fuzz fucked around with this message at 21:28 on Mar 16, 2023 |
# ? Mar 16, 2023 21:25 |
|
Is there even a public text model? I know you can fine-tune GPT, but I don't think any of this runs locally. You're likely going to have to upload these documents to OpenAI's servers.
|
# ? Mar 16, 2023 21:28 |
|
Hammer Bro. posted:Got some C-staff in the local office so I biked in early. My boss' boss came by and asked if anyone on my team could do a skunkworks Chat-GPT project because, presumably, it's a hot topic right now. Bleeding, bleeding edge stuff. Right now the only 'major' text generation thing with any attention is the leaked llama model, and some people are beginning to fine-tune it. As for hardware: all of it. We're talking extremely high-end NVIDIA professional cards, or multiple 4090s, as much system RAM as you can fit, the best CPU, everything. As was mentioned, text generation is among the hardest things you can ask hardware to do right now.
|
# ? Mar 16, 2023 21:33 |
|
pixaal posted:is there even a public text model? I know you can finetune GPT but I don't think any of this is local run. You are likely going to have to upload these documents to openAI's servers. https://openai.com/research/gpt-2-1-5b-release
|
# ? Mar 16, 2023 21:40 |
|
KakerMix posted:Bleeding, bleeding edge stuff. Right now the only 'major' text generation thing with any attention is the leaked llama model, and there are some people beginning to fine tune them. llama has some pretty strict licensing that doesn't allow commercial use and the available models are illegally leaked, so maybe not the best choice for a company right now stability AI is supposedly going to be releasing their own text generation AI that should do for text what stable diffusion did for images, so that's something to look out for in the coming months there are also various open models out there that can be played with like GPT neo but they won't be as good as llama on consumer hardware however, if you're fine tuning towards something specific that might not be that big of a deal, it seems like the biggest advantage of the larger models is that they can handle all kinds of tasks they haven't been tuned or specifically trained on I've seen impressive stuff done with fine tuned gpt2 level models
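For anyone wondering what "fine-tuning towards something specific" means mechanically: it's just continuing training on new data with the same update rule, so the model's probabilities shift toward the new domain. A toy stdlib sketch of the idea, using a bigram counter instead of a real neural net (nothing like a transformer, every name here is made up, and the numbers are illustrative only):

```python
from collections import Counter, defaultdict

class BigramLM:
    """Toy 'language model': P(next word | previous word) from bigram counts."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, text, passes=1):
        # Same update rule for pretraining and fine-tuning: count bigrams.
        words = text.lower().split()
        for _ in range(passes):
            for prev, nxt in zip(words, words[1:]):
                self.counts[prev][nxt] += 1

    def prob(self, prev, nxt):
        total = sum(self.counts[prev].values())
        return self.counts[prev][nxt] / total if total else 0.0

# 'Pretrain' on general text...
lm = BigramLM()
lm.train("the cat sat on the mat and the dog sat on the rug")
before = lm.prob("the", "server")  # 0.0: 'server' never follows 'the'

# ...then 'fine-tune' on domain text: same mechanism, new data.
lm.train("restart the server and check the server logs", passes=3)
after = lm.prob("the", "server")   # now the dominant continuation
print(before, after)
```

The point of the toy: fine-tuning doesn't add new machinery, it re-weights what's already there, which is why a smaller model tuned on a narrow domain can hold its own against a bigger general one on that domain.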
|
# ? Mar 16, 2023 21:42 |
|
PlaysGamesWrong posted:Counterpoint: gently caress those people. It's not a trick, and you aren't under any obligation to explain your work process. If I'm looking at an image I would like to know if it's AI generated or not. Tracing in the art world is perfectly accepted; tracing and claiming you drew it freehand is not. It's a similar thing here. If you're genuinely proud of what you see as your work, why wouldn't you be honest about how it was created?
|
# ? Mar 16, 2023 22:10 |
Mega Comrade posted:If I'm looking at an image I would like to know if it's AI generated or not. ...why?
|
|
# ? Mar 16, 2023 22:12 |
|
If you're looking at a deep learning build, understand that a proper one is going to run you like $10,000 easy, so before you even start, find out if you have a budget.
|
# ? Mar 16, 2023 22:18 |
|
lunar detritus posted:...why? Because it changes how I perceive and feel about the image. There are artists who have created wonderful lifelike portraits that look like photographs at first glance; knowing they are hand-drawn changes how I feel about and value the artist's work. It doesn't make their work greater or lesser than a photographer's image, but it does change it.
|
# ? Mar 16, 2023 22:20 |
|
lunar detritus posted:...why? AI is built on stolen human work
|
# ? Mar 16, 2023 22:22 |
Mega Comrade posted:Because it changes how I perceive and feel about the image. I get that if you are in a museum but I honestly couldn't care less if texture #42 or sprite #4 in a video game is AI generated.
|
|
# ? Mar 16, 2023 22:22 |
|
Hammer Bro. posted:I'm probably using the lingo wrong, but if I wanted to train a locally-hosted text AI on a bunch of internal documents to see if it could "provide value" (we have so many different knowledge bases no one reads / can find), what kind of hardware should I be looking for (mostly GPU guidance and other common requirements)? And which text AIs (models?) could I both acquire legally and run locally? And does anyone have any links for training / refining them on local datasets? I'm looking to train the image side of Stable Diffusion, not text, but there should be enough spillover to help you out. All of the VRAM. Like, literally, all of it. And then you probably need some more. I put together a 3x Intel Arc build on pcpartpicker as a joke and now it's not looking like so much of a joke; 48gb of VRAM is just a flat-out ridiculous target. Traditionally there are data center/workstation cards like the Quadro RTX A6000 for training, but the largest projects cluster those together in the hundreds at a time, and that's well outside the range of 'hobbyist'. The real work is matrix multiplication: being able to load a large set of numbers into memory, perform a multiplication on all of them, then get them out of memory so the next set can load, then repeat. I believe the next links are rather popular, so you'll be able to find support for them without much digging: https://pytorch.org/ for using Python to either use or train a model in whatever way you want. https://en.wikipedia.org/wiki/BERT_(language_model) a well-documented, popular natural language model. https://huggingface.co/ a giant repository of model files and more. I dunno if it's possible, but if you can get your boss to spring for something like this beast then go! Do it! Live the dream!
|
# ? Mar 16, 2023 22:32 |
|
lunar detritus posted:I get that if you are in a museum but I honestly couldn't care less if texture #42 or sprite #4 in a video game is AI generated. Cool, good for you, but many people do. In the example of some stuff on a wall in a game, I couldn't care less; it's background dressing in a much larger piece of work. But for individual pieces it matters to me. porfiria posted:Ai is built on stolen human work While I agree that's an issue, it's with the companies driving the main generators, not the technology itself. There's no reason a generator couldn't be trained on works whose licenses are public domain or that have opted in to being part of models. And that's another reason stipulating how an image was generated is important: I'd be more comfortable enjoying images created with such models than with the existing ones, just as I'm more comfortable buying products that are recyclable or fair trade, etc.
|
# ? Mar 16, 2023 22:32 |
|
porfiria posted:Ai is built on stolen human work Everything is built on stolen human work. It's called Capitalism.
|
# ? Mar 16, 2023 22:33 |
|
No one is quite sure what MJ's secret sauce is, but there are some things we do know. They have a massive amount of user feedback data that SD, being open source without telemetry, doesn't. Their text engine is something more advanced/specialized than the standard CLIP models. It seems very likely they're doing some sort of aesthetic grading on their training dataset. KinkyJohn posted:I'm also wondering if MJ "borrows" heavily from the SD camp and repurposes it to work in their system. Probably; why wouldn't they? But I think you have it backwards for offset noise: MJ was already displaying a far better range of bright, dark, and large compositional elements before the offset-noise paper was published and let people do something similar in SD.
|
# ? Mar 16, 2023 22:35 |
|
KwegiboHB posted:Everything is built on stolen human work. It's called Capitalism. I’ve accepted that I’m going to Hell, but I get that this particular evil may stick in people’s craws for various reasons. Like, I can’t listen to Michael Jackson anymore.
|
# ? Mar 16, 2023 22:35 |
|
KakerMix posted:Bleeding, bleeding edge stuff. Right now the only 'major' text generation thing with any attention is the leaked llama model, and there are some people beginning to fine tune them. Huh. Didn't realize it was harder than image gen. I doubt they'll spring for bleeding edge, but maybe I can get one of the NVIDIA cards authorized. We tend to have pretty decent CPU/RAM situations due to the nature of our work. Sounds like I might have to start off with the less-intense GPT-2 for now; we definitely can't share these documents externally, but we're not (officially) looking for short-story generation. Though when I asked ChatGPT to reveal to me some specific secret of Gene Wolfe's, it errored out. Clearly the AIs aren't that advanced yet.
|
# ? Mar 16, 2023 22:37 |
|
porfiria posted:I’ve accepted that I’m going to Hell, but I get that this particular evil may stick in people’s craws for various reasons. Like, I can’t listen to Michael Jackson anymore. I get it too; even if I personally disagree with it, I still understand that enough people honestly feel this way. That's why I'm parting out this computer: I want to create a Stable Diffusion model from scratch, then start remixing it like crazy. It'll be an incredibly long time, multiple generations, before it produces anything that isn't just the worst avant-garde nonsense color splashes... but I understand the math; there's a crossover point way off in the distance where, with enough compute power, the outputs will match and then exceed today's, without ever having 'seen' a thing. That's as close as I can call an 'ethical' model. Then it will be up to the rest of the world to see what they do with it.
|
# ? Mar 16, 2023 22:42 |
|
porfiria posted:I’ve accepted that I’m going to Hell, but I get that this particular evil may stick in people’s craws for various reasons. Like, I can’t listen to Michael Jackson anymore. Are you OK Annie?
|
# ? Mar 16, 2023 22:42 |
|
I wish all AI art was somehow permanently watermarked with the prompt that made it so that you could reliably ridicule the people who insist that their "greg rutkowski + trending on artstation" prompts are entirely unique and copyrightable and highly valuable, please buy my NFT
|
# ? Mar 16, 2023 22:48 |
|
r u ready to WALK posted:I wish all AI art was somehow permanently watermarked with the prompt that made it so that you could reliably ridicule the people who insist that their "greg rutkowski + trending on artstation" prompts are entirely unique and copyrightable and highly valuable, please buy my NFT "Everything I don't like shouldn't exist": a classic argument. I'll get off your lawn now too.
|
# ? Mar 16, 2023 22:52 |