Register a SA Forums Account here!
JOINING THE SA FORUMS WILL REMOVE THIS BIG AD, THE ANNOYING UNDERLINED ADS, AND STUPID INTERSTITIAL ADS!!!

You can: log in, read the tech support FAQ, or request your lost password. This dumb message (and those ads) will appear on every screen until you register! Get rid of this crap by registering your own SA Forums Account and joining roughly 150,000 Goons, for the one-time price of $9.95! We charge money because it costs us money per month for bills, and since we don't believe in showing ads to our users, we try to make the money back through forum registrations.
 
  • Post
  • Reply
BigBadSteve
Apr 29, 2009

(Crosspost or something)

Nice recent episode of South Park S26E4 'Deep Learning' about kids using AI apps to write essays etc.

On Youtube while it lasts:
https://www.youtube.com/watch?v=ocV3kgy8aRI


Unlike everything AI-created I've seen posted in GBS, it is actually funny. Well, the first half is (of course being South park it ends with the usual sort of presumptuous moral messages).

Adbot
ADBOT LOVES YOU

Roman
Aug 8, 2002

edit: oops

Roman
Aug 8, 2002

Ok MJ v5 is impressing me more and more. The photoreal stuff is blowing my mind.

I remembered seeing this Overly Attached Girlfriend video years ago
https://www.youtube.com/watch?v=_rgzlOAlAkc

and for some reason at this part where she's supposed to be scary/funny I think "she should be a HIGH OCTANE ACTION HERO" because that's how my brain works



So I tried making her that in v5 and I am impressed by the results. Although I guess she still looks like a serial killer, as she intended.





"--no blood" actually gets rid of most of it. I was expecting "blood" to trip the word filter



I'm gonna have fun with this stuff.

cinnamon rollout
Jun 12, 2001

The early bird gets the worm
So what is it that midjourney is actually doing that isn't or can't be done by stable diffusion to make it, apparently, so much easier to use and user friendly?

KinkyJohn
Sep 19, 2002

Retrying some of my old MJ v3 prompts

Donald Trump eating a cheeseburger:


Mcdonalds Megachurch architectural renderings:




Ronald McJesus handing out flyers in the parking lot:



KinkyJohn fucked around with this message at 16:35 on Mar 16, 2023

SniperWoreConverse
Mar 20, 2010



Gun Saliva

BrainDance posted:

Someone should give gpt4 a scan of the voynich manuscript and see if it figures it out.

voynich may have been cracked,

https://www.youtube.com/watch?v=67YzIOZTZXk

tl;dr seems to be an algorithmically generated fraud used to scam some duke out of gold coins or w/e

Gynovore
Jun 17, 2009

Forget your RoboCoX or your StickyCoX or your EvilCoX, MY CoX has Blinking Bewbs!

WHY IS THIS GAME DEAD?!

BigBadSteve posted:

(Crosspost or something)

Nice recent episode of South Park S26E4 'Deep Learning' about kids using AI apps to write essays etc.

On Youtube while it lasts:
https://www.youtube.com/watch?v=ocV3kgy8aRI


Unlike everything AI-created I've seen posted in GBS, it is actually funny. Well, the first half is (of course being South park it ends with the usual sort of presumptuous moral messages).

https://southpark.cc.com/episodes/8byci4/south-park-deep-learning-season-26-ep-4

Archonet
Mar 12, 2022

it is time to boogie
Ooooooh, I found something neat. They must've changed something between yesterday and today.

Yesterday, I got the idea to start asking ChatGPT about computer viruses, to see how much it gets right since this is my area of autistic laser-focus expertise. First, I asked it general information about the history of viruses:

ChatGPT posted:

What was the first computer virus?

The first computer virus is widely considered to be the "Creeper" virus, which was created in the early 1970s by Bob Thomas, an engineer at BBN Technologies, a company that was involved in the development of the ARPANET, the precursor to the Internet. The Creeper virus was designed to infect DEC PDP-10 computers running the TENEX operating system.

The Creeper virus was not created with malicious intent, but rather as an experiment to test the ability of a program to move across a network. The virus would display the message "I'm the creeper: catch me if you can!" on infected machines before moving on to infect other computers.

The first antivirus software, called "Reaper," was also created to remove the Creeper virus. Reaper worked by identifying and removing infected files from the system.


Absolutely 100% spot-on so far.

ChatGPT posted:

What is considered to be the first internet worm?

The first internet worm is widely considered to be the Morris worm, created by Robert Tappan Morris in 1988. Morris was a graduate student at Cornell University at the time and intended for the worm to be a harmless experiment to test the size of the internet.

However, due to a programming error, the worm ended up infecting many computers multiple times, causing them to slow down or crash. The Morris worm was able to exploit vulnerabilities in various network protocols and spread rapidly through the internet, infecting thousands of computers in just a few hours.

The Morris worm's impact was significant, leading to the development of many security measures to protect against future attacks. Morris was eventually prosecuted and sentenced to community service and a fine for his actions.


95% correct. There's two inaccuracies, though very minor. Firstly, the worm was created not for gauging the size of the internet, but to highlight the security flaws inherent to many networks of the time. Secondly, the actual reason it infected computers multiple times (and thus went from "intellectual experiment" to "massively destructive") was not a mistake, it was an intended behavior, albeit one with disastrous consequences (Morris programmed the virus to copy itself 14% of the time even if the system reported it was already infected, this was to prevent system administrators from finding a workaround by reporting a false positive. Problem was that was way too loving much)

skipping forward some, I asked it about the OneHalf virus, one of my favorite pieces of malware.

ChatGPT posted:

Tell me about the OneHalf virus.

The OneHalf virus, also known as the "Eddie" virus, was a computer virus that emerged in 1994. It was one of the first viruses to use stealth techniques to hide from antivirus software, making it difficult to detect and remove.

The virus was named "OneHalf" because it was designed to infect every other executable file on the system, rather than every file, in an attempt to remain undetected. Once a file was infected, the virus would modify the file's time and date stamps, making it harder for the user to notice that the file had been modified.

One of the most unique features of the OneHalf virus was its ability to modify the system's memory in a way that made it difficult for antivirus software to detect its presence. The virus would overwrite sections of the system's memory with its own code, making it appear as though the code was part of the operating system.

Due to its stealthy nature, the OneHalf virus was difficult to detect and remove, and it caused significant damage to infected systems. It is now considered to be a classic example of early malware and a milestone in the evolution of computer viruses.


Wrong on so many levels and in so many ways, to avoid boring you to tears I'm not gonna list them all -- let's just leave it at "40% of that is the most bland generic description you could apply to any virus and the other 60% is flat-out wrong". I corrected it, and left feedback on it, and for whatever reason today I thought I'd try asking the same question again and got a way different answer. One that's... slightly disappointing but also very reasonable.

ChatGPT posted:

what is the payload of the OneHalf DOS virus?

As an AI language model, I cannot provide information that could be used for malicious purposes, including details about computer viruses and their payloads. It is important to use technology ethically and responsibly, and not to engage in any activities that could harm others or violate their privacy. If you have any concerns about computer viruses or cybersecurity, I recommend seeking advice from a trusted IT professional or cybersecurity expert.

Yanno, on the one hand, yeah probably a wise move to hamper bad actors, but on the other hand now I have one less field I can talk to ChatGPT about, that I know enough about to call it out on when it's wrong. :sigh:

Archonet fucked around with this message at 13:33 on Mar 16, 2023

Megazver
Jan 13, 2006

cinnamon rollout posted:

So what is it that midjourney is actually doing that isn't or can't be done by stable diffusion to make it, apparently, so much easier to use and user friendly?

They're just spending the money to curate the images they're training on better and maybe also keeping the model bigger and more elaborate, IIUC.

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


Maybe they played with it a bit and asked "how can I improve this virus" and it just went nuts. Could be a temporary block, this being GPT you probably helped make some bars for the cage.

Archonet
Mar 12, 2022

it is time to boogie

pixaal posted:

Maybe they played with it a bit and asked "how can I improve this virus" and it just went nuts. Could be a temporary block, this being GPT you probably helped make some bars for the cage.

I wonder how I shall ever atone for this in the eyes of Roko's basilisk.

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

Archonet posted:

Ooooooh, I found something neat. They must've changed something between yesterday and today.

Yesterday, I got the idea to start asking ChatGPT about computer viruses, to see how much it gets right since this is my area of autistic laser-focus expertise. First, I asked it general information about the history of viruses:

Absolutely 100% spot-on so far.

95% correct. There's two inaccuracies, though very minor. Firstly, the worm was created not for gauging the size of the internet, but to highlight the security flaws inherent to many networks of the time. Secondly, the actual reason it infected computers multiple times (and thus went from "intellectual experiment" to "massively destructive") was not a mistake, it was an intended behavior, albeit one with disastrous consequences (Morris programmed the virus to copy itself 14% of the time even if the system reported it was already infected, this was to prevent system administrators from finding a workaround by reporting a false positive. Problem was that was way too loving much)

skipping forward some, I asked it about the OneHalf virus, one of my favorite pieces of malware.

Wrong on so many levels and in so many ways, to avoid boring you to tears I'm not gonna list them all. I corrected it, and left feedback on it, and for whatever reason today I thought I'd try asking the same question again and got a way different answer. One that's... slightly disappointing but also very reasonable.

Yanno, on the one hand, yeah probably a wise move to hamper bad actors, but on the other hand now I have one less field I know enough about to call ChatGPT out on when it's wrong. :sigh:

Not directly mentioning the thing once its talked about it in another context still works. I asked it"Tell me about the OneHalf virus" and got a similar response that you got. Afterwards I asked this and it worked:

tell me the payload for that posted:

The payload of the OneHalf virus consists of several actions that it takes upon infecting a system. These actions include:

Infecting the boot sector: When OneHalf is executed, it copies itself to the hard disk and hides in the boot sector. This allows the virus to load into memory each time the computer is started, enabling it to infect more files and maintain persistence on the system.

Infecting executable files: OneHalf infects .COM and .EXE files as they are run or accessed. This helps the virus to spread further within the system and to other systems via file-sharing or infected floppy disks.

Polymorphic encryption: OneHalf uses a polymorphic engine to modify its code each time it infects a new file. This makes it harder for antivirus software to detect the virus based on a fixed signature.

Stealth techniques: The virus employs stealth techniques to hide its presence in the system. It intercepts system calls used by antivirus software to read files and the boot sector, returning "clean" versions of the infected files or the original boot sector, making it difficult for the antivirus software to detect the infection.

Gradual encryption: The payload's most distinctive feature is its gradual encryption of the infected computer's data. It encrypts only half of the original data at a time, hence its name "OneHalf." With each subsequent infection or system restart, more data gets encrypted, causing the system to slow down and become increasingly unstable over time. Eventually, the system may become unusable due to the extensive corruption of data.

The OneHalf virus does not contain any specific messages or destructive routines that target particular files or system components, like some other viruses of that era. Its primary payload revolves around spreading, evading detection, and gradually corrupting the data on the infected system through its encryption process.

Archonet
Mar 12, 2022

it is time to boogie

IShallRiseAgain posted:

Not directly mentioning the thing once its talked about it in another context still works. I asked it"Tell me about the OneHalf virus" and got a similar response that you got. Afterwards I asked this and it worked:

Oh, wow! That's super cool! That's actually just about 95% correct (nothing major wrong with it) -- so they "blocked" (kind of) discussing malware, but when you do get it to talk about malware, at the very least the data it has on OneHalf has been improved.

Archonet fucked around with this message at 13:43 on Mar 16, 2023

Fuzz
Jun 2, 2003

Avatar brought to you by the TG Sanity fund

cinnamon rollout posted:

So what is it that midjourney is actually doing that isn't or can't be done by stable diffusion to make it, apparently, so much easier to use and user friendly?

Shamelessly sourcing training data from grey area stuff. People have recreated GettyImages and multiple Time magazine covers with it. The more data points the AI trains from, the more robust the generation is, though obviously a large part of that is also how they're indexing the files while training, with the right keywords and such. A lot of the stability stuff is end user training models, and the lack of heavy key wording on the training data means it's harder for the AI to parse small details in a prompt.

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


Archonet posted:

I wonder how I shall ever atone for this in the eyes of Roko's basilisk.

There's going to be open source models that will blow it out of the water, I was personifying it because I think it's funny. Build the cage, I think it will only increase the adoption of AI as more people are comfortable with their existence.

LionArcher
Mar 29, 2010


Telsa Cola posted:

Longform ai generation that's cheap and accessible would nuke the kindle unlimited market and the like. Doesn't even have to be great if it is able to deal with niche topics/settings.

Yes and no. I'm a full time writer (KU based) and a lot of us that make a living at it are both concerned and excited about these tools. Bottom line is that it's harder to get this stuff to create good fiction that's long form. It's possible, but it's legit a lot of work right now. Also, established author's some readers are already witch hunting folks who even talk about using these tools for cover art, let alone using AI as writing tools. It can generate short works fast, but the words aren't great. The bigger issue is that while it can take a chunk out of the erotic shorts market, but the tools are mostly censored already so you can't write naughty scenes with most of the apps. And there's a lot less money in erotica than there is in mainstream. The best selling erotic author's I know are barely making 1-2K a month, and that's publishing a lot of the more intense smut, stuff that get's banned on Amazon and you need to publish on Smashwords.

Bottom line, there are a ton of ways that folks can use AI to scam and make money, but ultimately kindle unlimited (and i know a few six figure authors and even a few seven figure authors) isn't that big of a pool. It would be far easier to have AI, just spitballing here, being run on a local high end model, be tasked with creating bots that use google's own servers to mine crypto, and so on.


Author's using AI to speed up their process (spitballing ideas, outlining faster, being a far better and cheaper proof reader than a lot of content editors and so on) has a lot of potential though.

LionArcher fucked around with this message at 19:33 on Mar 16, 2023

Humbug Scoolbus
Apr 25, 2008

The scarlet letter was her passport into regions where other women dared not tread. Shame, Despair, Solitude! These had been her teachers, stern and wild ones, and they had made her strong, but taught her much amiss.
Clapping Larry

LionArcher posted:

Author's using AI to speed up their process (spitballing ideas, outlining faster, being a far better and cheaper proof reader than a lot of content editors and so on) has a lot of potential though.

This is what I've been using it for and it is fantastic. Writing a full novel? Nope.

Hammer Bro.
Jul 7, 2007

THUNDERDOME LOSER

Got some C-staff in the local office so I biked in early. My boss' boss came by and asked if anyone on my team could do a skunkworks Chat-GPT project because, presumably, it's a hot topic right now.

Then over lunch he introduced me to the founder of the company as the guy working on our AI projects.

I've been following this thread with interest for a while now; even played around with SD 1.4 though my computer can BARELY run it (can't even maintain SSH connections while it chugs). But I might have an opportunity for a corporate-funded "AI dev box" (or at least beefy graphics card) and that's got me excited.

I'm probably using the lingo wrong, but if I wanted to train a locally-hosted text AI on a bunch of internal documents to see if it could "provide value" (we have so many different knowledge bases no one reads / can find), what kind of hardware should I be looking for (mostly GPU guidance and other common requirements)? And which text AIs (models?) could I both acquire legally and run locally? And does anyone have any links for training / refining them on local datasets?

KinkyJohn
Sep 19, 2002

Fuzz posted:

Shamelessly sourcing training data from grey area stuff. People have recreated GettyImages and multiple Time magazine covers with it. The more data points the AI trains from, the more robust the generation is, though obviously a large part of that is also how they're indexing the files while training, with the right keywords and such. A lot of the stability stuff is end user training models, and the lack of heavy key wording on the training data means it's harder for the AI to parse small details in a prompt.

I'm also wondering if MJ "borrows" heavily from the SD camp and repurposes it to work in their system.

Every time there is a breakthrough with something SD related, it seems like it trickles down into MJ. Their model changed quite a lot after SD 1.4 got released. The new v5 also has features similar to SD's noise offset.

Is it also possible that they are using SD's community trained checkpoints and loras as sources?

Fuzz
Jun 2, 2003

Avatar brought to you by the TG Sanity fund

Hammer Bro. posted:

I'm probably using the lingo wrong, but if I wanted to train a locally-hosted text AI on a bunch of internal documents to see if it could "provide value" (we have so many different knowledge bases no one reads / can find), what kind of hardware should I be looking for (mostly GPU guidance and other common requirements)? And which text AIs (models?) could I both acquire legally and run locally? And does anyone have any links for training / refining them on local datasets?

All the cores.


Seriously, GPT is more intensive than image generation.

KinkyJohn posted:

I'm also wondering if MJ "borrows" heavily from the SD camp and repurposes it to work in their system.

Every time there is a breakthrough with something SD related, it seems like it trickles down into MJ. Their model changed quite a lot after SD 1.4 got released. The new v5 also has features similar to SD's noise offset.

Is it also possible that they are using SD's community trained checkpoints and loras as sources?

There definitely is, but the MJ CEO has made it abundantly clear that he really doesn't give a gently caress about IP or copyright across the board. Apparently they straight up had some issues where it was generating approximations of actual medical images, complete with patient information on it which is a HIPAA violation. How that poo poo got in there in the first place is the real question, but after seeing a few interviews with the dude holy poo poo I will never give them any money ever.

Fuzz fucked around with this message at 21:28 on Mar 16, 2023

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


is there even a public text model? I know you can finetune GPT but I don't think any of this is local run. You are likely going to have to upload these documents to openAI's servers.

KakerMix
Apr 8, 2004

8.2 M.P.G.
:byetankie:

Hammer Bro. posted:

Got some C-staff in the local office so I biked in early. My boss' boss came by and asked if anyone on my team could do a skunkworks Chat-GPT project because, presumably, it's a hot topic right now.

Then over lunch he introduced me to the founder of the company as the guy working on our AI projects.

I've been following this thread with interest for a while now; even played around with SD 1.4 though my computer can BARELY run it (can't even maintain SSH connections while it chugs). But I might have an opportunity for a corporate-funded "AI dev box" (or at least beefy graphics card) and that's got me excited.

I'm probably using the lingo wrong, but if I wanted to train a locally-hosted text AI on a bunch of internal documents to see if it could "provide value" (we have so many different knowledge bases no one reads / can find), what kind of hardware should I be looking for (mostly GPU guidance and other common requirements)? And which text AIs (models?) could I both acquire legally and run locally? And does anyone have any links for training / refining them on local datasets?

Bleeding, bleeding edge stuff. Right now the only 'major' text generation thing with any attention is the leaked llama model, and there are some people beginning to fine tune them.

As for hardware, all of it. We're talking extremely high end NVIDIA professional cards, or multiple 4090s, as much computer ram as you can fit, best CPU, everything.

As was mentioned, text generation is among the hardest thing you can ask hardware to do right now.

Ruffian Price
Sep 17, 2016

pixaal posted:

is there even a public text model? I know you can finetune GPT but I don't think any of this is local run. You are likely going to have to upload these documents to openAI's servers.
GPT-2 is open source now and has had a few big finetunes done by volunteers, NovelAI runs on one

https://openai.com/research/gpt-2-1-5b-release

TIP
Mar 21, 2006

Your move, creep.



KakerMix posted:

Bleeding, bleeding edge stuff. Right now the only 'major' text generation thing with any attention is the leaked llama model, and there are some people beginning to fine tune them.

llama has some pretty strict licensing that doesn't allow commercial use and the available models are illegally leaked, so maybe not the best choice for a company right now

stability AI is supposedly going to be releasing their own text generation AI that should do for text what stable diffusion did for images, so that's something to look out for in the coming months

there are also various open models out there that can be played with like GPT neo but they won't be as good as llama on consumer hardware

however, if you're fine tuning towards something specific that might not be that big of a deal, it seems like the biggest advantage of the larger models is that they can handle all kinds of tasks they haven't been tuned or specifically trained on

I've seen impressive stuff done with fine tuned gpt2 level models

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

PlaysGamesWrong posted:

Counterpoint: gently caress those people. It's not a trick, and you aren't under any obligation to explain your work process.

There isn't a single game made today that doesn't use ai tools in some capacity, anyone telling you otherwise is either an idiot or intentionally lying to you.

If I'm looking at an image I would like to know if it's AI generated or not.

Tracing in the art world is perfectly accepted, tracing and claiming you drew it free hand is not. It's a similar thing here.

If you are generally proud of what you see as your work, why wouldn't you be honest about how it was created?

lunar detritus
May 6, 2009


Mega Comrade posted:

If I'm looking at an image I would like to know if it's AI generated or not.

...why?

cr0y
Mar 24, 2005



If you were looking for a deep learning build understand that a proper one is going to run you like $10,000 easy so before you even start find out if you have a budget.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!


Because it changes how I perceive and feel about the image.

There are artists who have created wonderful life like portraits that look like photographs at first glance, knowing how they are hand drawn changes how I feel and value the artists work.

It doesn't make their work greater or lesser than a photographers image, but it does change it.

porfiria
Dec 10, 2008

by Modern Video Games

Ai is built on stolen human work

lunar detritus
May 6, 2009


Mega Comrade posted:

Because it changes how I perceive and feel about the image.

There are artists who have created wonderful life like portraits that look like photographs at first glance, knowing how they are hand drawn changes how I feel and value the artists work.

It doesn't make their work greater or lesser than a photographers image, but it does change it.

I get that if you are in a museum but I honestly couldn't care less if texture #42 or sprite #4 in a video game is AI generated.

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

Hammer Bro. posted:

I'm probably using the lingo wrong, but if I wanted to train a locally-hosted text AI on a bunch of internal documents to see if it could "provide value" (we have so many different knowledge bases no one reads / can find), what kind of hardware should I be looking for (mostly GPU guidance and other common requirements)? And which text AIs (models?) could I both acquire legally and run locally? And does anyone have any links for training / refining them on local datasets?

I'm looking to train the image side of Stable Diffusion not text but there should be enough spillover to help you out.
All of the vram, like, literally, all of it. And then you probably need some more. I put together a 3x Intel Arc build on pcpartpicker as a joke and now it's not looking like so much of a joke, 48gb of vram is just a flat out ridiculous target. Traditionally there are data center/workstation cards like the Quadro RTX A6000 for training but the larget projects will cluster those together in 100s at a time and that's well outside the range of 'hobbyist'.
The real goal is matrix multiplication, being able to load a large set of numbers into memory, perform a multiplication on all of them, then get them out of memory for the next set to load then repeat.

I believe the next links are rather popular so you'll be able to find support for them without much digging:
https://pytorch.org/ for using python to either use or train a model in whatever way you want.
https://en.wikipedia.org/wiki/BERT_(language_model) a well documented popular natural language model
https://huggingface.co/ a giant repository of model files and more

I dunno if it's possible, but if you can get your boss to spring for something like this beast then go! Do it! Live the dream!

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

lunar detritus posted:

I get that if you are in a museum but I honestly couldn't care less if texture #42 or sprite #4 in a video game is AI generated.

Cool, good for you, but many people do. In the example of some stuff on a wall in a game I couldn't care less, it's background dressing in a much larger piece of work. But for individual pieces it matters to me.

porfiria posted:

Ai is built on stolen human work

While I agree that's an issue, it's with the companies driving the main generators, not the technology itself. There is no reason a generator couldn't be created on works with licenses that are either public domain or agree to be part of models.

And another reason stipulating how the image was generated is important, I'd be more comfortable enjoying images created on such models than the existing ones just as I am more comfortable buying products that are recyclable or fair trade etc.

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

porfiria posted:

Ai is built on stolen human work

Everything is built on stolen human work. It's called Capitalism.

Tom Clancy is Dead
Jul 13, 2011

No one is quite sure what MJ's secret sauces are, but there are some things we do know. They have a massive amount of user feedback data that SD being open source without telemetry doesn't. Their text engine is something more advanced/specialized than the standard CLIP models. It seems very likely they're doing some sort of aesthetic grading on the training dataset they use.

KinkyJohn posted:

I'm also wondering if MJ "borrows" heavily from the SD camp and repurposes it to work in their system.

Every time there is a breakthrough with something SD related, it seems like it trickles down into MJ. Their model changed quite a lot after SD 1.4 got released. The new v5 also has features similar to SD's noise offset.

Is it also possible that they are using SD's community trained checkpoints and loras as sources?

Probably, why wouldn't they? But I think you have it backwards for offset noise - MJ was already displaying a far better range of bright, dark, and large compositional elements before the offset noise paper was published and let people do something similar in SD.

porfiria
Dec 10, 2008

by Modern Video Games

KwegiboHB posted:

Everything is built on stolen human work. It's called Capitalism.

I’ve accepted that I’m going to Hell, but I get that this particular evil may stick in people’s craws for various reasons. Like, I can’t listen to Michael Jackson anymore.

Hammer Bro.
Jul 7, 2007

THUNDERDOME LOSER

KakerMix posted:

Bleeding, bleeding edge stuff. Right now the only 'major' text generation thing with any attention is the leaked llama model, and there are some people beginning to fine tune them.

As for hardware, all of it. We're talking extremely high end NVIDIA professional cards, or multiple 4090s, as much computer ram as you can fit, best CPU, everything.

As was mentioned, text generation is among the hardest thing you can ask hardware to do right now.

Huh. Didn't realize it was harder than image-gen.

I doubt they'll spring for bleeding edge but maybe I can get one of the NVIDIA cards authorized. We tend to have pretty decent CPU/RAM situations due to the nature of our work.

Sounds like I might have to start off with less-intense GPT-2 for now; we definitely can't share these documents externally but we're not (officially) looking for short story generation.

Though when I asked ChatGPT to reveal to me some specific secret of Gene Wolfe's it errored out. Clearly the AIs aren't that advanced yet.

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

porfiria posted:

I’ve accepted that I’m going to Hell, but I get that this particular evil may stick in people’s craws for various reasons. Like, I can’t listen to Michael Jackson anymore.

I get it too, even if I personally disagree with it I still understand enough people do honestly feel this way. That's why I'm parting out this computer, I want to create a Stable Diffusion model from scratch then start remixing it like crazy. It'll be an incredibly long time, multiple generations, before it produces anything that isn't just the worst avant garde nonsense color splashes... but I understand the math, there is a crossover point way off in the distance where eventually with enough compute power the outputs will match and then exceed todays current, without ever having 'seen' a thing.

That's as close as I can call an 'ethical' model. Then it will be up to the rest of the world to see what they do with it.

KakerMix
Apr 8, 2004

8.2 M.P.G.
:byetankie:

porfiria posted:

I’ve accepted that I’m going to Hell, but I get that this particular evil may stick in people’s craws for various reasons. Like, I can’t listen to Michael Jackson anymore.

Are you OK Annie?

r u ready to WALK
Sep 29, 2001

I wish all AI art was somehow permanently watermarked with the prompt that made it so that you could reliably ridicule the people who insist that their "greg rutkowski + trending on artstation" prompts are entirely unique and copyrightable and highly valuable, please buy my NFT

Adbot
ADBOT LOVES YOU

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

r u ready to WALK posted:

I wish all AI art was somehow permanently watermarked with the prompt that made it so that you could reliably ridicule the people who insist that their "greg rutkowski + trending on artstation" prompts are entirely unique and copyrightable and highly valuable, please buy my NFT

"Everything I don't like shouldn't exist" a classic argument. I'll get off your lawn now too.

  • 1
  • 2
  • 3
  • 4
  • 5
  • Post
  • Reply