IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

Darko posted:

The issue with A.I. and general creative output is that artists/writers/musicians/etc. are going to go broke specifically because people by and large want products, not art. A lot of people don't see actual craft and skill in any art, they just want some nebulous "good" and want as much and as cheap as they can get it. As an artist, I can currently easily see where an A.I. is cobbling stuff together as opposed to even a mediocre digital artist is actually producing things, and am completely uninterested in having anything produced by an A.I. because it's missing human touch and the thought and mistakes and improvisation that comes from it. But as you saw from social media 6 months ago or whatever, people are just happy to finally have a portrait of themselves that they don't have to pay someone 1k plus for, because it's not about the craft. People by and large just view any art as a product and want to consume it and A.I. will feed them.

I think a lot of people want AI art because they have their own stories to tell, but they don't have the time or skills to produce the work on their own, nor the resources to constantly hire an artist. The human touch also isn't absent; most AI art has a person consciously choosing what they want the image to be. Just because they don't see much value in the artist's vision doesn't mean they don't care about what the art is representing. Also, let's be honest here, I doubt a lot of artists care about all their commissions either. They might care about certain pieces, but I doubt they care about D&D character #507 they were hired to draw except as a product to sell.

There's also nothing wrong with art being treated as a utilitarian tool. In plenty of situations the art's purpose is to convey information, and it has little artistic value.

I'm not going to deny there are some people just using AI to produce anime titty girls, but people have always used art for porn. It's not really a flaw in society, more a result of basic human drives.

AI art was also extremely lacking in tools to control the output at the start, so it was hard to put much meaning into AI art. It's gotten a lot better with technology like ControlNet, but there is still a lot of room to improve.

I'm excited about AI because in the future it's going to give people the opportunity to create things that usually could only be created by large groups of people, or by working themselves to death. The technology is not quite ready yet, but I think it will lead to a massive boom in independent projects. Some might argue that it's going to produce a lot of garbage, but that's the price we pay when we give people better tools to express themselves.


IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

Honestly, I think once AI reaches the point where it can replace programmers, it's basically capable of replacing any job except ones that strongly rely on social interaction. There might be a slight delay for physical labor, but the tech is nearly there already.

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

Count Roland posted:

A side effect of Dall-e and similar programs is that there's a lot of AI art being generated, which shows up on the internet, which is trawled for data, which is then presumably fed back into AI models. I wonder if AI generated content is somehow filtered out to prevent feedback loops.

AI being used to train AI is not the problem people think it is. In fact, using AI to generate more training data is sometimes actually desirable, because you can better control the input. Midjourney, for example, uses RLHF (Reinforcement Learning from Human Feedback) to improve its model, and Stable Diffusion is going to release a model using the same technique. The controls used to gather the original dataset will work fine with a bunch of AI data in the mix, because even before AI there was a lot of really bad data out there. (You can search the LAION database for a term and see there's a lot of unrelated garbage.)
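
To make the "the existing controls already handle bad data" point concrete, here's a toy sketch of the kind of quality gate scraping pipelines apply before training. Everything here is made up for illustration (`quality_score` stands in for a real learned aesthetic/quality model like the ones used to filter LAION); the point is just that low-value AI output gets dropped by the same filter as any other junk:

```python
# Hypothetical sketch: a single quality gate that drops junk captions,
# whether they came from bad scrapes or from AI-generated pages.

def quality_score(caption: str) -> float:
    """Toy stand-in for a learned aesthetic/quality model."""
    junk_markers = ("thumbnail", "placeholder", "404")
    if any(m in caption.lower() for m in junk_markers):
        return 0.0
    # Crude proxy: longer, more descriptive captions score higher.
    return min(1.0, len(caption.split()) / 10)

def filter_dataset(samples, threshold=0.3):
    """Keep only samples whose caption clears the quality threshold."""
    return [s for s in samples if quality_score(s["caption"]) >= threshold]

scraped = [
    {"caption": "oil painting of a lighthouse at dusk, detailed"},
    {"caption": "404 placeholder"},
    {"caption": "thumbnail"},
]
kept = filter_dataset(scraped)  # only the lighthouse caption survives
```

A real pipeline scores the image itself (CLIP embeddings, aesthetic predictors) rather than just the caption, but the shape of the filter is the same.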

As for the full potential of AI, I see a future where anybody can create their own TV show, game, or movie if they're willing to put in the effort, without working themselves to death or relying on a team. I don't mean somebody just types in a prompt and automatically generates one; rather, they focus on their strengths and use AI as an assistant to generate the rest. Then there's using AI with robots; I could see a future where people have robot housekeepers. Then there are the medical advantages: early detection of symptoms and, hopefully, more accurate diagnoses.

My biggest concern is that getting the hardware and training data required for AI will require a large organization, either government or corporate. I don't want a future where you have to pay a subscription for everything, where corporations or governments have firm control over what is acceptable, and where it's very easy for them to know everything about you. I have hope that the open source community won't let that scenario happen. It's been pretty good at keeping up with technological advancements, even though higher-quality LLMs are a bit hard to run on consumer hardware at the moment.

-edit VVVV Local Stable Diffusion doesn't put a watermark on the images in most GUIs, and it's fairly simple to remove the watermark.

IShallRiseAgain fucked around with this message at 19:11 on Apr 1, 2023

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

Main Paineframe posted:

I'm not talking about obstructing business at all. I'm talking about big business having to pay for the stuff they use in their for-profit products, just like everyone else.

If that makes generative AI uneconomical, then so be it. But when I say AI companies should respect the rights of media creators and owners, I'm not saying that as some secret backdoor strategy to kill generative AI. I'm just saying that giving AI companies an exception to the rules everyone else has to follow is bullshit. Business shouldn't get an exception from rules and regulations simply because those regulations are inconvenient to their business model. And that's doubly true when it's big business complaining that the rules are getting in the way of their attempts to gently caress over the little guy.

It's silly to say that any individual piece of text has no "influence" on ChatGPT. Clearly the text has some impact, or else OpenAI wouldn't have needed it in the first place. And if OpenAI needs that text, then they should have to pay for it. Trying to divine exactly how much of a part any individual item plays in the final dataset is a distraction. It played enough of a part that OpenAI put it in the training data. If OpenAI is using (in any way) content they don't have the rights to, they should have to pay for it (or at least ask permission to use it).

Laws, rules, and regulations make plenty of products impossible to market, and they make plenty of business models essentially unworkable. If AI trainers can't figure out a way to train AIs in a manner that complies with existing law, that's their problem, not ours. I don't see any reason that we should give them an exception. My ears are deaf to the cries of corporate executives complaining that regulations are getting in the way of their profit margins. And anyway, you're thinking of it exactly the wrong way around. It's definitely possible to make generative AIs with copyright-safe datasets - again, this isn't some sort of backdoor ban. However, it's only practical to do so if the law is enforced against the companies that don't use copyright-safe datasets! Otherwise, the generative AIs trained on bigger and cheaper pirated datasets will inevitably be more profitable than the generative AIs trained on copyright-safe datasets.

The sticker price of ChatGPT is irrelevant. It's a for-profit service run by a for-profit corporation that's currently valued at about 20 billion US dollars. Stability AI is fundraising at a $4 billion valuation. Midjourney is valued at around a billion bucks. There's no plucky underdog in the AI industry, no poor helpless do-gooders being crushed by the weight of government regulation. It's just Uber all over again - ignore the law completely and hope that they'll be able to hold off the lawyers long enough to convince a big customer base to lobby for the laws to be changed.

I think AI falls under the fair use category; it's pretty hard to argue that what it is doing isn't transformative. There are super rare instances where AI can produce near copies, but that isn't desirable behavior, and it stems from an image being over-represented in the training data.

Also, the requirement to have a "copyright-safe" dataset just means media companies like Disney, or governments, will have control over the technology and nobody else will be able to compete. They hold the rights to more than enough copyrighted content to build a pretty good model without paying a cent to artists. The only people that will be screwed over are the general public.

Although hopefully, since the models are already out there, no government will be able to contain AI even if they crack down hard on it.

IShallRiseAgain fucked around with this message at 23:23 on Apr 4, 2023

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

StratGoatCom posted:

AI content CANNOT be copyrighted, do you not understand? Nonhuman processes cannot create copyrightable images under the current framework. You can copyright human-amended AI output, but as the base stuff is for all intents and purposes public domain, if someone gets rid of the human stuff, it's free. The inputs do not change this. Useless for anyone with content to defend. And it would be a singularly stupid idea to change that, because it would allow, in essence, a DDOS against the copyright system and bring it to a halt under a wave of copyright trolling.

Stop listening to the AI hype dingdongs, they're as bad as the buttcoiners.

I'm not saying anything about it being copyrighted. I'm saying that AI-generated content probably doesn't violate copyright because it's transformative. The only reason it wouldn't is that fair use is so poorly defined.

I agree that AI images generated from a prompt alone should not be copyrightable, because of the potential issues with that. But it only takes a little bit of effort to generate AI content that actually is copyrightable.

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

StratGoatCom posted:

Machines should not get the considerations people do, especially ones backed by billion-dollar companies. And don't bring 'fair use' into this, it's an Anglo weirdness, not found elsewhere.

It's people using AI to produce content, though. Like I said before, it's not an issue for large corporations or governments; they already have access to the images to make their own datasets without having to pay a cent to artists. It's the general public that will suffer, not the corporations.

Also, fair use is essential wherever copyright exists; otherwise companies or individuals can suppress any criticism of the works they produce. Japan is a great example of this, and tons of companies regularly exploit the fact that fair use isn't a thing there.

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

Main Paineframe posted:

If Disney makes an image generation AI, I certainly wouldn't expect them to license that out to just anyone. They'd guard that as closely and jealously as possible. If Disney builds a machine designed exclusively to create highly accurate and authentic images of their most valuable copyrighted characters, they're not gonna let anyone outside the company anywhere near it.

https://www.gofundme.com/f/protecting-artists-from-ai-technologies

It's a GoFundMe by the Concept Art Association, an advocacy org for film industry concept artists which is dedicated to protecting their interests from both the movie execs and outside threats.

One of the things they intend to spend the money on is a Copyright Alliance membership. So an AI "artist" who thinks the opposition to AI art is "facist" posted some carefully cropped screenshots on Twitter to falsely portray the Copyright Alliance as an organization catering exclusively to the interests of major media companies, and suggested that the entire thing was just a "psyop" by corporate stooges acting in Disney's name. It went viral, naturally, and variations on it got circulated all around by AI art supporters.

There isn't actually anything to suggest that there's anything shady about the fundraiser at all, as far as I've seen. It's just something that AI art users made up and circulated around to try and deflect the near-unanimous scorn of real artists away from themselves. Since basically no one ever double-checks anything they see in a screenshot attached to a tweet, it was fairly effective at muddying the waters.

I think I'm just fine with money going to the Authors Guild, the Screen Actors' Guild, the Directors' Guild of America, the Graphic Artists Guild, the Independent Book Publishers Association, the Association of Independent Music Publishers, and Songwriters of North America, which are just some of the numerous unions, trade associations, and artists' rights groups on that page. Yeah, Disney and a few other big wealthy companies are on that list of members, but so are an absolute fuckton of organizations dedicated to defending the rights of individual creators against those very same big businesses.

It's worth noting, however, that the only money that fundraiser is sending to the Copyright Alliance is for paying its own membership fees and sponsoring other artist rights' groups to join. The lobbyist isn't being hired on the Copyright Alliance's behalf - the lobbyist will work directly for the Concept Art Association.

That uh... doesn't actually change anything. Sure, there are a lot of unions, trade associations, and artists' rights groups that are part of it, but that doesn't change the fact that companies like Disney are on it too. It makes the actual motives a bit more unclear, but it doesn't change the fact that companies holding a lot of rights to art see some advantage in advocating for this. Also, these unions and other artist advocacy groups are pretty much forced to fight for this because their members very much want it, even if they don't understand the full implications of what would actually happen. I don't think their members would care, or believe it, if the groups tried to explain the probable consequences of this going through.

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

SaTaMaS posted:

The thing keeping GPT from becoming AGI has nothing to do with consciousness and everything to do with embodiment. An embodied AGI system would have the ability to perceive and manipulate the world, learn from its environment, adapt to changes, and develop a robust understanding of the world that is not possible in disembodied systems.

I think a true AGI would at least be able to train at the same time as it's running. GPT does do some RLHF, but I don't think it's real-time. It's definitely a Chinese Room situation at the moment: right now it just regurgitates its training data and can't really utilize new knowledge. All it has is short-term memory.

Although some people have taken an extremely rudimentary step in that direction, like with https://github.com/yoheinakajima/babyagi.
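
For anyone who hasn't looked at that repo: the core idea is tiny. A task queue plus two LLM calls, one that executes the current task and one that proposes follow-up tasks based on the result, looped until the queue is empty. Here's a minimal sketch with the LLM calls stubbed out as toy functions (all names are mine, not babyagi's):

```python
from collections import deque

def execute(task: str) -> str:
    """Toy stand-in for an LLM call that performs a task."""
    if task.startswith("summarize"):
        return f"summary complete for: {task}"
    return f"research notes gathered for: {task}"

def plan(objective: str, result: str) -> list:
    """Toy stand-in for an LLM call that proposes follow-up tasks."""
    if result.startswith("research notes"):
        return [f"summarize findings for: {objective}"]
    return []

def run(objective: str, first_task: str, max_steps: int = 10):
    """Execute tasks from a queue, letting each result spawn new tasks."""
    tasks, log = deque([first_task]), []
    while tasks and len(log) < max_steps:
        task = tasks.popleft()
        result = execute(task)
        log.append((task, result))
        tasks.extend(plan(objective, result))
    return log

log = run("learn about tides", "research tides")
# Two steps: the research task, then the summarize task it spawned.
```

With real LLM calls in place of the stubs, the `max_steps` cap matters a lot, since the planner can happily generate tasks forever.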

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

cat botherer posted:

ChatGPT cannot control a robot. JFC people, come back to reality.

https://www.microsoft.com/en-us/research/group/autonomous-systems-group-robotics/articles/chatgpt-for-robotics/

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

eXXon posted:

Do you have sources for this?

Is there any reason not to be especially skeptical of these kinds of corporate hype pages with unreviewed tech notes (like the "Sparks of AGI" paper)?

The only part of that paper that I found rather compelling (assuming they weren't just cherry-picked examples for a tailor-made training set) is the ability to parse longer lists of instructions to generate a specifically-formatted plot. But I still can't equate parsing text in some subset of problems with understanding or intelligence.

I was merely replying to that specific post. I didn't say anything about it needing to understand what it's doing or having intelligence. You don't need that to control a robot.

In my own experiments with ChatGPT, I'm making a text adventure game that interfaces with it, and I don't even have the resources to fine-tune the output.

IShallRiseAgain fucked around with this message at 01:35 on Apr 9, 2023

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

mobby_6kl posted:

I doubt that anyone can answer that for certain right now because all of this is up to interpretation in court.

As you say, the models use a lot of copyrighted data for training. So the first question is whether or not this counts as fair use.

The models also retain some of the training data, in compressed form, in a way that can be reproduced fairly close to the original, e.g.:

[example images omitted from quote]
Does storing and distributing this count? How much is the company vs the user's responsibility? I dunno. There's a pretty good article on this here:

https://arstechnica.com/tech-policy/2023/04/stable-diffusion-copyright-lawsuits-could-be-a-legal-earthquake-for-ai/

which also has an interesting comment about contributory infringement that I'm not going to try summarizing:

copyright infringement not intended :D

While it is possible for Stable Diffusion to produce near-identical outputs, this is very much an outlier situation where duplicate training images are way over-represented in the training data. It's also unlikely to happen unless you set out to make a copy of an image in the first place. This is behavior nobody wants, it's actually harmful to the model, and it's fixable.
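
The usual fix is deduplicating the training set so no single image appears hundreds of times. A rough sketch of the idea, using an exact content hash for simplicity (real pipelines would use perceptual hashes or embedding similarity to also catch near-duplicates; the names here are my own):

```python
import hashlib
from collections import Counter

def fingerprint(image_bytes: bytes) -> str:
    """Exact-duplicate fingerprint; a perceptual hash would catch
    resized/recompressed copies too."""
    return hashlib.sha256(image_bytes).hexdigest()

def dedupe(dataset, max_copies=1):
    """Keep at most `max_copies` of any identical image."""
    seen = Counter()
    kept = []
    for img in dataset:
        fp = fingerprint(img)
        if seen[fp] < max_copies:
            kept.append(img)
        seen[fp] += 1
    return kept

# 500 copies of one famous image would otherwise dominate training
# and make the model able to reproduce it nearly verbatim.
data = [b"mona_lisa"] * 500 + [b"unique_photo"]
deduped = dedupe(data)  # one copy of each survives
```

Cap the copy count instead of blindly scraping, and the memorization outlier cases mostly go away.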

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

Raenir Salazar posted:

From a game design standpoint it'll be really interesting to see if anyone tries to revive the "type in instructions" subgenre of adventure games. Using AI to interpret the commands more broadly.

At a minimum, if it supports speech-to-text, it might make for more accessible games?

I'm working on that right now. https://www.youtube.com/watch?v=uohn5o0Cgpw

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

gurragadon posted:

This looks fun, definitely a rainy night kind of game that I would play instead of reading creepypastas. Keeping it contained to a haunted house might keep the AI under control a little bit. I want to see how many ways I can die in this mansion. How far along are you and do you have any goals for it or just kind of leaving it open ended? Either way would be cool, I think. Does the AI generating the text make it easier or harder for you to make the game?

So basically, the idea is that it's much more controlled than just asking ChatGPT to simulate a text adventure game. The conversation resets for each pre-defined "room," and the game handles being the long-term memory for the AI. There are set objectives, including an end objective, but multiple ways to achieve them.

I'm making progress, but there is still a lot of writing on my end. The initial text for each room is written by me, and I also establish rules for each room.
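
For anyone curious what that room-reset structure looks like in code, here's a minimal sketch of the pattern, with a stubbed `call_llm` in place of a real chat-completion API call. All names are hypothetical; this is my reading of the design, not the actual game code:

```python
def call_llm(messages):
    """Stand-in for a real chat-completion API call."""
    return f"[narration based on {len(messages)} context messages]"

class Game:
    def __init__(self, rooms):
        self.rooms = rooms    # room_id -> {"intro": ..., "rules": ...}
        self.memory = []      # long-term memory lives in the engine

    def enter_room(self, room_id, player_action):
        room = self.rooms[room_id]
        # The conversation resets per room: only the hand-written intro,
        # the room's rules, and an engine-held summary of past events
        # carry over -- not the previous room's full transcript.
        messages = [
            {"role": "system", "content": room["rules"]},
            {"role": "system", "content": "Previously: " + "; ".join(self.memory)},
            {"role": "user", "content": room["intro"] + "\n> " + player_action},
        ]
        reply = call_llm(messages)
        self.memory.append(f"{room_id}: {player_action}")
        return reply

game = Game({"foyer": {"intro": "A dusty foyer.", "rules": "Stay in character."}})
out = game.enter_room("foyer", "open the door")
```

The nice property of this shape is that the prompt stays small no matter how long the playthrough gets, because the engine, not the model, decides what's worth remembering.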


IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

Jaxyon posted:

My company already announced they've assigned a VP to start looking at how we can use AI to eliminate jobs.

It's not said like that exactly, but if you translate the corporate speak they're right out in the open with it.

I've got a good target

quote:

[CEO - Monday 9:00 AM - Executive Meeting]

CEO: Good morning, everyone. Let's begin our weekly executive meeting. Today, we have quite a few items on the agenda, some of which may seem redundant, but it's crucial that we discuss these topics in detail.

[CEO - Monday 11:00 AM - Department Heads Meeting]

CEO: Thanks for joining me for this department heads meeting. I want to go over some of the same topics we discussed in the executive meeting to ensure everyone is on the same page.

[CEO - Monday 1:00 PM - All-hands Meeting]

CEO: Welcome to our all-hands meeting. I'm sure you're all eager to get back to work, but I want to ensure that everyone is up to speed with the latest company developments.

[CEO - Tuesday 10:00 AM - Project Falcon Review]

CEO: After reviewing Project Falcon, I've decided that we will be discontinuing it, effective immediately. I understand the team has been working hard and achieved great results, but we need to focus on other projects that align more closely with our strategic goals.

[CEO - Wednesday 9:00 AM - HR and Finance Meeting]

CEO: I've been looking at our numbers, and it appears that we need to make some difficult decisions. We'll be laying off 10% of our workforce across various departments. I understand this will be tough, but it's necessary for the long-term health of the company. On a related note, due to my efforts in securing a major client, I will be receiving a performance bonus.

[CEO - Thursday 2:00 PM - Project Updates Meeting]

CEO: As we wrap up this week, let's have another meeting to discuss the status of all our ongoing projects. I want to make sure that everyone is aware of any changes and that we're all working towards the same goals.

[CEO - Friday 3:00 PM - Weekly Recap Meeting]

CEO: To close out the week, let's recap everything we've discussed in our various meetings. It's essential to keep everyone informed, even if some of the information may seem repetitive. Communication is key, after all.

[End of Simulation]
