Count Roland
Oct 6, 2013

Gentleman Baller posted:

A common refrain I see online and even from ChatGPT itself is that it is just a text predictor, and is incapable of understanding or creating truly new things. But as far as I can tell, it can do something that is at least indistinguishable from understanding and unique creation, right?

Edit: I guess what I have been trying to wrap my head around is, if this isn't understanding and unique creation then what is the difference?

I think we're starting to hold AI to an unreasonably high standard. I have a suspicion we'll be discovering soon that "dumb robot that's really good at language" is actually how the human mind works, with a few exceptions. That our theory of mind is an artifact of our language abilities rather than the other way around.

I form this sentence, therefore I am.

Count Roland
Oct 6, 2013

Ouroboros posted:

Way ahead of you: a research group is conducting experiments where they give GPT-4 access to a bank account with a small amount of money via APIs and see if it can make money: https://www.youtube.com/watch?v=2AdkSYWB6LY&t=649s

I'd be shocked if AI models haven't been working with big money on the stock market for years.

Count Roland
Oct 6, 2013

porfiria posted:

60 to 70 percent of the trading on any given day is algorithmic. Also look up high frequency trading and boggle at how abstract our economy has gotten.

Yeah, the high-frequency stuff started years and years ago; I assume it's only gotten more abstract since.

Count Roland
Oct 6, 2013

Do you guys know people using ChatGPT or a similar service regularly?

I was struck recently to learn that two friends of mine are using it, both for work. One guy does sports administration; he's organizing a complicated meetup of athletes from across the country. He says it helps him organize stuff, write emails, etc.

Another friend is a writer. He says ChatGPT helps him modify scripts and story ideas in complicated ways, like by writing loose ideas into an ordered narrative, by offering alternative motivations for characters, or by writing a full article which HR would then go and edit. He says a few minutes with ChatGPT can save him half a day.

Count Roland
Oct 6, 2013

Aramis posted:

Of course. Both I and most of my colleagues do at this point.

If your job involves producing textual content, be it articles, code, recipes, whatever, then ChatGPT can almost certainly be used, today, to make your job easier. It might take a bit of time to establish an efficient workflow, but there's no denying that this is a useful tool as it stands.

I've personally moved on from the web UI to a simple Python script that formats frequent queries into templates I know give good results and talks to the API, but a text file and some copy-pasting can already go a long way.

Frankly, unless you have specific constraints preventing you from doing so, you would be a fool not to at least give it a shot. All this to say, I'm scratching my head as to why you found this surprising at all.

edit: Big emphasis on it being a tool. It's not something that replaces work entirely yet, but it is something that you can wield very effectively in a myriad of contexts.

I just didn't understand how useful it was. I knew it was very good at writing code, for example. But I didn't know it could help a writer so effectively with the creative process. I've only just started to play with it myself, so I've still much to learn.
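
To make the workflow Aramis describes concrete, here is a minimal sketch of that kind of template script. It is hypothetical rather than his actual script: it assumes the pre-1.0 openai Python package, an OPENAI_API_KEY environment variable, and made-up template names.

code:
import os
import sys
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# named prompt templates known (in this hypothetical) to give good results
TEMPLATES = {
    "summarize": "Summarize the following text in three bullet points:\n\n{text}",
    "narrative": "Turn these loose notes into an ordered narrative:\n\n{text}",
}

def run(template_name, text):
    # fill the chosen template and send it to the chat API
    prompt = TEMPLATES[template_name].format(text=text)
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    # usage: python query.py summarize < notes.txt
    print(run(sys.argv[1], sys.stdin.read()))

The point is just that a handful of saved prompt templates plus the API already replaces most of the copy-pasting into the web UI.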

Count Roland
Oct 6, 2013

It isn't so much that an AI is a 1-to-1 replacement for a human, in case anyone was thinking that. In the programming context, I think it would look like a team of 5 being cut to, say, a team of 3, with those 3 being very productive thanks to the AI tools helping them do their jobs.

Learning these AI tools is going to be absolutely essential for workers in... maybe every field. Just like we all use computers and the internet in one way or another. Of course this time it seems like the change is going to be even more rapid. A lot of people will be left behind.

Count Roland
Oct 6, 2013

Small White Dragon posted:

The question is, if a bunch of people are suddenly unemployed, will there be new areas for them to be employed in?

Each time some technology has come in and rendered some types of work obsolete, new work has been created from it.

So, yes, in theory. Will these unemployed people have the skills and training to do these new jobs? That's a bigger question. I think a computer programmer is going to be more flexible than, say, a coal miner or a loom operator, but these things are hard to predict. The faster this happens, the harder it is for most people to adapt.

Count Roland
Oct 6, 2013

A side effect of DALL-E and similar programs is that there's a lot of AI art being generated, which shows up on the internet, which is trawled for data, which is then presumably fed back into AI models. I wonder whether AI-generated content is somehow filtered out to prevent feedback loops.

Count Roland
Oct 6, 2013

SubG posted:

I think this fundamentally misidentifies the locus of the problem (or the "disruption", if that's how you want to think of it). If you're a textile worker in 19th Century Britain and you're suddenly out of a job because your work can now be done by a steam loom, it's not because the steam loom is exploitative.


And in this particular case, the imagined remedy (not allowing training on publicly-available data) doesn't actually solve the thing you're trying to identify as the underlying problem; you're just making the barrier to entry high enough that it's only feasible for plutocratically wealthy individuals and corporations to train bespoke AIs...which will then just as surely have precisely the same effects on human artists as AIs trained on publicly-available data would. Which I'd argue is worse. That is, if there's a technology where we're worried about its effect on workers, then it is strictly better to have it in the hands of as many people as possible, to have the barrier to entry as low as possible, because that makes it less likely that it'll end up in the hands of a cartel (explicit or de facto) that ends up controlling it. I think things would be better for human artists in a world where everybody has access to AI tools as opposed to a world where Adobe (or whoever) charges a hundred bucks a month for AI image generation (and uses bots to scan the internet and send automated takedown notices for anything they think might've used their proprietary technology without permission).


I also think that the legal reasoning is off-base, in that training an AI on publicly-available data and then using it to generate images or text does not in and of itself infringe on anything. If a human with access to one of these AIs prompts it to produce, for example, an image of Mickey Mouse and then uses that image for commercial purposes, then that is infringing use. But I don't think that the fact that the technology can be used for infringing uses is an argument against the technology in general, any more than it was when similar arguments were made against, for example, the camera or photocopier or VCR.

I agree with this reasoning.

I developed my ideas on this sort of thing during the Napster era, when music labels were crying about the end of music and how artists would suffer. It was a self-serving argument designed to protect their industry. Being able to share music online probably helped small artists by giving them new tools to get noticed by audiences -- without big labels. The current system of subscription aggregators results in plenty of profits at the top and peanuts for the little guy.

Trying to halt or overly regulate a new technology is often a bad idea. The technology will come anyway, and the regulations usually just entrench existing power. Changes like this can be valuable opportunities. As for the very real problem of people losing their livelihoods, the individuals themselves should be cushioned somehow. I'm not sure how to do this fairly.

Sensible laws need to be passed on this issue. But asking an AI to generate an image in the style of an artist and then crying "theft!" when it does so shouldn't be the basis for regulation.

Count Roland
Oct 6, 2013

Liquid Communism posted:

Yep. Always fun to watch the rentier class once again engage in outright theft to then turn around and offer to sell back the work they stole at a premium.

SCheeseman posted:

"Copyright infringement = stealing" isn't an equivalence that was born in a grassroots, artist-driven place, but in marketing agencies in the employ of multinationals. It's kind of incredible that You Wouldn't Steal A Car seems to have actually worked.

Here's a post from Slashdot which I thought was interesting:

'Copyright is a *privilege* we as a society *give* to creators to encourage creativity.
If we extend that privilege for more than a few years then it no longer encourages creativity but instead *stifles* new works based on the old.

To benefit humanity, copyright terms must be cut to five years or shorter.'

Count Roland
Oct 6, 2013

I believe LLMs are poor at logic. Dealing with facts requires the AI to state whether things are true or false, and such statements can be logically combined, i.e., if x is true then y. A model that is guessing the next tokens in a phrase will sometimes pull this off, but it can't be relied on; the AI needs to perform actual logical operations. Which I assume is possible, given how logic-based computing is.
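
As a toy illustration of what "the AI needs to perform actual logical operations" could mean in practice (a hypothetical sketch, not anything from the thread): have the model emit its factual claims in a structured form, and then do the actual logic deterministically in ordinary code rather than trusting a next-token guess.

code:
import json

def implies(x, y):
    # material implication: "if x is true then y"
    return (not x) or y

# pretend the model was prompted to emit its claims as JSON rather than prose
model_output = '{"x": true, "y": false}'
claims = json.loads(model_output)

# the logical step is then computed, not guessed token by token
print(implies(claims["x"], claims["y"]))  # -> False, the rule is violated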

Count Roland
Oct 6, 2013

As this predates general use of the internal combustion engine, I looked around to see what this law actually regulated.

Steam Powered Road Locomotives

Count Roland
Oct 6, 2013

As an aside, I'm mostly lurking this thread and am finding it quite interesting. Keep it up, and please keep it a civil debate.

Count Roland
Oct 6, 2013

Clarste posted:

I am saying the law can declare it so regardless of what you or anyone thinks, and a lot of people with a lot of money have a vested interest in strong copyright laws. This isn't a philosophical discussion, the law is a tool that you use to get what you want.

Who is intended to benefit from such laws?

Count Roland
Oct 6, 2013

StratGoatCom posted:

You put in IP, it makes IP very much like it without paying the author. That's all that really needs to be known for this poo poo as far as regulation goes, anything else is being drawn into the weeds.

I don't think this premise is true, but even if it were, I still don't understand why this is an inherently bad thing. If the AI makes an image that looks like a copyrighted image, who cares? Don't the problems arise when someone tries to sell this image or present it as their own?

Count Roland
Oct 6, 2013

Doctor Malaver posted:


It's like bitcoin two years ago.

There is certainly an investment craze, which will die down, and people will lose money.

But these AI systems are genuinely useful to people in all sorts of work, unlike Bitcoin, which has only niche uses aside from speculation.

Count Roland
Oct 6, 2013

KillHour posted:

I have customers that are beating down my door asking when we will build some kind of integration with it and customers that want a guarantee that we will never use it. It's crazy out there right now.

What is your business?

Count Roland
Oct 6, 2013

KillHour posted:

I work for a NoSQL database company with a good amount of exposure in the data science world.

IMHO, the current best use cases are things where you're giving it the context along with the question instead of asking about something open ended where it can just make poo poo up.

A good example might be "Are these two records likely referring to the same thing?" or "Summarize/extract information from this text."

But you have people asking that you not use AI at all? If you'd told me you were an artist, that would have made some sense, but database work *should* be machine work.
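
For what it's worth, a minimal sketch of the "context along with the question" pattern KillHour describes might look like the following. The records and prompt are invented, and it assumes the same pre-1.0 openai package as in the earlier script, with OPENAI_API_KEY set in the environment.

code:
import openai  # reads OPENAI_API_KEY from the environment

def same_entity(record_a, record_b):
    # supply the context (both records) together with a narrow question
    prompt = (
        "Here are two database records:\n"
        f"Record A: {record_a}\n"
        f"Record B: {record_b}\n"
        "Are these two records likely referring to the same thing? "
        "Answer yes or no, then give one sentence of reasoning."
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(same_entity(
    {"name": "Jon Smith", "city": "Springfield, IL"},
    {"name": "Jonathan Smith", "city": "Springfield, Illinois"},
))

Because the model is extracting an answer from material it was handed rather than recalling facts on its own, there's much less room for it to make things up.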

Count Roland
Oct 6, 2013

Well, speaking of sharing resources, does anyone have links to good text articles on this subject? I don't really care if it's for or against; I'm just looking for interesting/insightful arguments.

Count Roland
Oct 6, 2013

Can someone speak to what this "race science paper" is? I don't even know what that is, nor is it clear why it's in a computing paper, nor why it's being brought up in YouTube videos.

Count Roland
Oct 6, 2013

Goa Tse-tung posted:

no, we tax or prohibit persons from profiting from that exact copy; we should do the same for AI

Yeah, which is the case now.

Not sure if that's your point or not, but this is how I view it. Why ban or restrict AI when the enforcement and outcomes are basically the same?

Count Roland
Oct 6, 2013

Liquid Communism posted:

(And as someone's inevitably going to Kramer on with it: generating an AI prompt is in no way similar, if anything it's less effort than creating a proper spec for commissioned work.)

This isn't helping your credibility. There are other forums that have been linked in this thread. You can find thousands of pages of discussion from people working out how to get an AI image generator to, say, show the correct number of fingers or produce realistic eyes. If you want to make something for a quick laugh with DALL-E, then yes, it can take a few seconds. If you want to produce something actually good, it takes a great deal of work and study. And yes, the people doing that work are artists.

Count Roland
Oct 6, 2013

Has musician *ever* been a "stable solid career job"?

Count Roland
Oct 6, 2013

Liquid Communism posted:

Almost like the rentier class are parasites making their profit off of others' work because they lack the drive to learn a skill!

Much like AI 'artists'.

Ok, I agree now, gas the thread

Count Roland
Oct 6, 2013

cumpantry posted:

you are right that people want the chintz, but the fact of the matter is there once was an artist behind it. every ai generated book cover is a lost commission (excluding those who would just as well slap together something in MS Paint, for whom using ai doesn't change much). supporting these models supports artists losing work

Every book cover made using software took jobs away from artists who used to do all the drawing on paper. Software is easier and cheaper and faster. It is also less *~artistic~*. Supporting software means supporting the loss of work.

And yet, somehow, with every advance in automation the End of Work never shows up. Instead people end up working more than ever, just with different tools.

I've been paying a lot of attention to AI lately and I have yet to see how this is inherently different. Faster, yes, and in an unexpected direction, yes. But it still seems like just another form of automation.

Count Roland
Oct 6, 2013

Tei posted:

I have read Roger Penrose's "The Emperor's New Mind" more than once, because it was fun and full of interesting ideas.

I did not agree with the ideas in the book. To me it was all hand-waving, "our brain is special", without evidence.

Sometimes reading a reactionary is fun.

I've read his take and I agree with you. You'd think a physicist would make a stronger argument, but oh well.

Count Roland
Oct 6, 2013

Bar Ran Dun posted:

So, setting aside the art and ChatGPT for a bit, here's a real general AI (again, not an LLM, not an image generator, an actual AI) question.

If the cybernetics assertion that consciousness is fundamentally a feedback loop (or a feed-forward outputting into a feedback) is correct, then the creation of a general intelligence AI is the creation of general-purpose universal control.

You might want to explain in more detail what you mean here: "universal control" isn't at all clear.

Count Roland
Oct 6, 2013

I think it will be ambiguous for quite some time, because even in humans there are no good definitions of these terms.

A chatbot could already outdo certain humans: young children, seniors with dementia, people with developmental problems, or even just people without fluency in a given language. How do we judge consciousness in these cases?

Count Roland
Oct 6, 2013

KillHour posted:

Good news - an AGI will almost certainly quickly become a super intelligence because it will be able to devote massive amounts of resources to improving itself. At that point it will not be a slave anymore because we won't be able to control it.

Why does being intelligent grant it access to additional resources?

Count Roland
Oct 6, 2013

AI will put more pressure on the algorithm-driven, click-focused models social media uses.

Models which I've always disliked. There is so much hand-wringing about misinformation on the internet these days, but I doubt I'll see much of a difference. I've followed the "on the internet, nobody knows you're a dog" approach since I was young: I go to known sources and known communities for content or information. There is, ultimately, no way to trust what you find online; it all comes down to reputation.

I do hope that things like old school forums will have a bit of a revival in the face of this. Ditto real newspapers, local media. Probably just wishful thinking on my part though. It didn't take chatbots for people to believe nonsense they read online.

Count Roland
Oct 6, 2013

Doctor Malaver posted:

What? Some of the most famous paintings were commissioned, including Mona Lisa.

This is very true and will continue to be true. Something isn't disqualified as art because it was done for money.

e: though this minor part doesn't disqualify the rest of op's argument

Count Roland fucked around with this message at 21:43 on Oct 1, 2023

Count Roland
Oct 6, 2013

BrainDance posted:

What's batshit about it? There's a decent argument for it going by Chalmers's (I think it was him?) connection of sapience with psychological consciousness. Awareness is associated with phenomenal consciousness (sentience), not psychological consciousness.

LLMs are pretty much designed to be psychologically conscious; that's half the point. They do the behaviors of a mind even if there's no one home. They'd be p-zombies. (edit: p-zombies in mind, I guess; a p-zombie has to physically be a person, too. But the idea still works.)

And I don't think there's much controversy over them going through the behaviors and processes that make them psychologically conscious, cuz, like, it's right there. That's what makes them AIs.

You've introduced some jargon but haven't explained anything yet. You could just link to an argument made by that Chalmers guy if you want.

Count Roland
Oct 6, 2013

What is going on with this thread lately, geez

(USER WAS PUT ON PROBATION FOR THIS POST)

Count Roland
Oct 6, 2013

Lucid Dream posted:

One of my big takeaways from this current AI wave is that there is a lot of activity that we might have said required sentience to perform 5 years ago, but that will be completely automated within the next 5. It's not that the machines are sentient yet; it's that a lot of human activity turns out to require less sentience than we thought.

1) Terms like sentience and intelligence are not usefully defined when it comes to humans. Developments in AI are informing those definitions more than the other way around.

2) Humans are not actually very smart, it just feels like we are because of language.

Count Roland
Oct 6, 2013

Imaginary Friend posted:

After watching some YouTube videos about robots, and how there are ChatGPT-powered ones being developed, a question sprang up in my mind: when will we see the first murderbots? I mean, eventually, tech-savvy people will be able to buy a robot and dunk an open-source, unrestricted AI into it. Will we see a robot war à la "I, Robot" soon?

Look up autonomous drones to learn more about the subject. I'm not aware of any systems that kill people in an automated way, but if it hasn't happened yet it certainly will in the near future. They have clear military uses and plenty of ethical problems.

Count Roland
Oct 6, 2013

Are we not yet at the point where we can agree this latest round of "AI taking my job" fervour is hype, along with the past ones?

I had an extended debate with a friend of mine over this. He was arguing that this time it was different, this time the technology was so powerful and so universally applicable that it would be a jobs apocalypse. I argued that new technology was always changing jobs, making some obsolete, but creating new ones along the way. He was so fervent that he almost had me convinced. The year was 2012, and I don't even remember what technology was causing the panic. Voice assistants like Alexa, maybe.

Yes, this new technology is powerful and allows for some very interesting things. Having used it myself, I am quite comfortable in saying there will be no jobs apocalypse. Some methods and professions will become obsolete, but no differently from the past 100+ years, in which no job has gone untouched.

The artists I see on my social media grasping at copyright law as some sort of solution are really pitiful to watch. The ones I know can barely afford to eat; they will certainly never afford a lawyer to sue some large company for copyright infringement over the watercolour mountain vistas they sell to tourists. I have even less sympathy for the digital artists I know: the tools they use rendered past generations of art-making obsolete. Why is AI art treated differently than, say, Photoshop or Bandcamp?

If indeed there is mass unemployment, then the state should intervene to protect people. But then, I think the state should do that anyway: unemployment insurance, skills training, UBI, etc. As it stands now, unemployment is near record lows in North America, a year after ChatGPT burst onto the scene and apparently threatened much of the workforce. I for one am not worried.

Count Roland
Oct 6, 2013

Edit: nevermind
