Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

reignonyourparade posted:

This is true but also the same could be said about a not negligible amount of humans when it comes to horses.

Can it? Even if you have never seen a horse, you have likely seen another animal that is similar. That's how we often describe things, 'x is like y', to aid understanding: at some point you hit a reference the human has personal experience or perception of. Is that comparable to an LLM?


Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Yeah I use it as a stack overflow replacement. Sometimes it's very helpful, sometimes it's rubbish and trying to coax the right answer takes longer than just a Google search.
But I'm very careful to never just paste anything in directly.

The amount of people probably pasting company IP into these things without questioning it has gotta be staggering.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Depends on the model and interface. Some say they might analyse your inputs; others claim they won't.
There are many models you can now download and run yourself locally if you want complete control. They won't be the state of the art ones though.

Adobe has created an 'art' generator trained only on public domain images and stock images it owns the rights to, assuming that how the models were trained is your main issue with them.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

gurragadon posted:

Anybody who is training to be a writer will train themselves on copyrighted works...

I've seen this comparison before and ones like it and it seems to ignore that we have a lot of existing things that people can do, but companies cannot. Especially when it comes to copyright and licensing.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
New concept or not, it's an individual vs a profit-motivated, multi-billion-dollar-backed company. I find it weird to compare them.

If you use a picture of Mickey Mouse to help train yourself, that's allowed. If a company puts Mickey Mouse in their training material for employees, it's not. They have to pay for a license.

And yes, much of copyright is dumb and has been hijacked by huge corporations to protect work long after the original creator has died. But it's also used to protect every living creator and their livelihood.

Mega Comrade fucked around with this message at 19:56 on Apr 4, 2023

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
There have been lots of technologies that didn't take off because they would be prohibitively expensive to run.

Why should society single out LLMs as an exception to this common occurrence?

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

cat botherer posted:

There's nothing special to the GPL over restrictive copyrights here. The GPL itself holds up, but they are claiming less rights than, e.g. Nintendo is with Mario. The only novel thing is it being about code instead of movies or books, but it is still generative and does not duplicate copy-written code. It's pretty hard to see how a court would side against Microsoft while maintaining previous decisions.

It can duplicate copyrighted code. GitHub has admitted as much, though it only does it about 1% of the time. It's in the FAQ.


GitHub faq posted:

Our latest internal research shows that about 1% of the time, a suggestion may contain some code snippets longer than ~150 characters that matches the training set.

It will also, funnily, sometimes autofill the names of very prolific GitHub authors when generating doc comments.
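For a sense of what that 1% check means in practice, here is a toy sketch of a verbatim-duplication filter: flag a suggestion if it shares a contiguous run of 150+ characters with the training corpus. The threshold comes from the FAQ quote above; the corpus and matching approach are illustrative assumptions, not how Copilot actually does it.

```python
# Toy duplication check: does a suggestion contain a long verbatim
# run copied from the training corpus? Illustrative only.
from difflib import SequenceMatcher

def contains_long_match(suggestion: str, corpus: str, threshold: int = 150) -> bool:
    """True if `suggestion` shares a contiguous substring of at least
    `threshold` characters with `corpus`."""
    m = SequenceMatcher(None, suggestion, corpus, autojunk=False)
    match = m.find_longest_match(0, len(suggestion), 0, len(corpus))
    return match.size >= threshold

corpus = "def quicksort(xs):\n    if len(xs) <= 1:\n        return xs\n" * 5
verbatim = corpus[:200]              # lifted straight from the corpus
original = "print('hello world')"    # nothing in common beyond short runs

print(contains_long_match(verbatim, corpus))   # True: 200-char verbatim run
print(contains_long_match(original, corpus))   # False
```

A real filter would have to run against billions of lines, so it would use hashing or suffix structures rather than `SequenceMatcher`, but the idea is the same.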

Mega Comrade fucked around with this message at 09:59 on Apr 5, 2023

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Well we got to 8 pages. A good run.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Doctor Malaver posted:

It would be helpful for objections to AI to be specific and free of padding. For instance, why complain about "false text" flooding the internet? How's that going to be different from now? Are we in the end days of truthful internet, free of conspiracy theories, money scams, hate speech..?


No one is arguing AI is inventing this stuff, just that it lowers the bar and allows much more of it to flood the internet than ever before.
Clarkesworld closing submissions is an early look at the new issues these systems are going to bring.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Fridman also claims he's an MIT lecturer, which is one of those 'technically true' statements. It's very common for people to give 'lectures' as open-forum discussions that anyone can walk into, but he plays it off like he teaches there. A lot of his claims to expertise are like this: small nuggets of truth wrapped in bullshit.

He's a grifter and an awful podcast host, his voice is so incredibly boring and he asks very dumb questions. I have no idea how he manages to get so many high quality guests or so many listens.

He's the Joe Rogan of tech.

Mega Comrade fucked around with this message at 10:06 on May 3, 2023

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
It's not that we don't know how to solve climate change, it's that we don't want to do it. An AI superintelligence wouldn't change that.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

GlyphGryph posted:

No, and that's a stupid loving conclusion from reading what I wrote. It's got nothing to do with destroying capitalism.

What I will criticize is the morons who want to make things worse, forever, for other artists and themselves and culture as a whole, in a desperate attempt to claw a couple more years of stability out of their own career in a scheme that's unlikely to even loving work.

None of the proposals I've seen from artists angry about AI right now have any interest in the public good. All of them empower corporations at the expense of individual creators. None of them even meaningfully reduce the career risk posed by AI! It's all insanely dumb panic poo poo, and almost all of it seems to be based on a sense of entitlement and wounded pride than practical concerns for the well-being of the public or artistic community - the same sense of entitlement and wounded pride that has been used countless times in the past to convince artists to get on board with the nightmare copyright regime that already stands.

Those entitled artists, trying to deprive me of my toys :argh:

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

SCheeseman posted:

It's a shame so many artists have ended up getting redpilled by the copyright lobby, regurgitating their corporate propaganda, playing their game and positioning the "tech bro" or big tech in general as the enemy when, christ, they're the same loving people at the top. The only losers are the public who will have a means of production taken for them and locked away behind a paywall, with a copyright system strengthened to protect the rights of IP holders, which aren't necessarily (and often not) artists. What kind of fool wants there to be a requirement to show a chain of work? That's a loving nightmare scenario!

I could change about 5 words in this and make it a pro crypto spiel.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

SaTaMaS posted:

The disruption isn't just people getting laid off due to AI, it's also jobs going unfilled due to being unable to find people with the necessary AI skills, which seems to be the bigger problem at the moment.

At least in my field, a lot of the stuff that claims to need AI skills does not. Prompt engineering is laughable as a skill: it's not hard to learn, and whatever difficulty currently exists will be phased out as tooling improves. 99% of engineering roles requesting AI skills are also mostly silly, as all you are really doing is integrating with an API, which any engineer worth their salt can do.
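To put that "integrating with an API" point in perspective, most of those roles boil down to something like the following: build a JSON payload, POST it, read a field back. The endpoint URL, model name, and key below are placeholders, not a real service.

```python
# Minimal sketch of a chat-style API integration. Endpoint and model
# name are hypothetical placeholders.
import json
import urllib.request

API_URL = "https://example.com/v1/chat/completions"  # placeholder endpoint

def build_request(prompt: str, model: str = "some-model") -> urllib.request.Request:
    """Assemble the POST request; sending it is just urlopen(req)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_KEY_HERE",  # placeholder
        },
        method="POST",
    )

req = build_request("Summarise this ticket for me.")
print(req.get_method())                              # POST
print(json.loads(req.data)["messages"][0]["role"])   # user
```

That's the whole "AI skill" for most integration work: a request, a response, and the usual error handling any engineer already knows.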

The real skills will be in creating and training custom models, and I mean real training, not what KwegiboHB thinks they are doing.

Mega Comrade fucked around with this message at 09:30 on Oct 18, 2023

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Has Bing ever been profitable? A Bing AI search can cost as much as 10x a regular one.

How does that model get to profitability?

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
What if you get 10 lovely outputs, like I did yesterday because I couldn't remember the syntax for a DataPager?

I had to go to Stack Overflow! What is this, 2021!?

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
We've had these sorts of technologies for ages; there are examples out in the wild:

https://youtu.be/ZNVuIU6UUiM?feature=shared

The issue is the mechanical side more than the AI. The machines are big and expensive to run and maintain, and often have only one use. An underpaid worker might not be as efficient, but they can do multiple tasks given to them.

And it's not like machinery hasn't been reducing the need for humans for farmwork for the last few centuries.


There has also been a burger flipping robot for ages https://youtu.be/KJVOfqunm5E?feature=shared

Apparently it breaks down constantly and is a loss leader

Mega Comrade fucked around with this message at 15:31 on Nov 30, 2023

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!


This could do a better job

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

SaTaMaS posted:

It's more that people point to hallucinations as a reason for why LLMs can't be considered intelligent.

It's not hallucinations that prevent LLMs from being considered intelligent.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
It's not like scientists had hard and fast rules that they just decided didn't matter anymore when LLMs dropped.
They have been debating this stuff for over 50 years. The Turing test, for example, was known to be flawed almost since it was penned, but it's kept around as a useful marker (the public knows and understands it, so it's handy for publicity).

I.e. an intelligent AI will have to surpass it, but that doesn't mean it's a marker of intelligence in itself.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
That's a giant leap in logic and a misunderstanding of what "hallucinations" in LLMs are.

Their namesake isn't a very good description of what they actually are.

Mega Comrade fucked around with this message at 11:56 on Dec 17, 2023

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Tei posted:

Maybe part of the reason the human brain is so slow is because is mechanical. Biological cells must actually build new connections and chemistry changes (molecules) actually have to move.

A computer can learn a new language is fractions of a second, but for a human brain it takes years.

We use to change the definition of inteligence every 10 years.
Then in the 2000-ish we started updating the definition every year.
And in 2023, the definition of intelligence need to be updated every week to opt-out progress from IA.

So of course, we need to do the same with imagination.

No.
This is all nonsense.

vegetables posted:

Do we actually know that AI hallucinations are meaningfully different from human confabulation, or are we just claiming that’s the case?


Yes, we do. We know exactly what they are, why they are effectively 'baked into' how LLMs work, and why extensive fine-tuning can reduce them.
They are not really anything like human hallucinations; it's just a name that got coined.

If they hadn't been called hallucinations, and had instead been called prediction anomalies or something, we wouldn't be having this discussion.
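"Prediction anomaly" fits the mechanics: the model always emits *some* plausible-looking continuation, and there is no separate truth check anywhere in the loop. A toy sketch, with a made-up vocabulary and made-up logits, shows the point; if training had skewed the scores toward the wrong token, the exact same machinery would emit it with the exact same confidence.

```python
# Toy next-token prediction: a model outputs a full probability
# distribution over its vocabulary and picks from it. Nothing in this
# loop encodes whether the chosen token is factually true.
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Imaginary continuation of "The capital of France is ..."
vocab = ["Paris", "London", "Narnia"]
logits = [4.0, 2.5, 1.0]   # learned scores; made up for illustration

probs = softmax(logits)
choice = vocab[probs.index(max(probs))]  # greedy decoding

print(choice)                 # Paris
print(round(sum(probs), 6))   # 1.0 - always a full, confident distribution
```

A "hallucination" is just this same sampling step landing on a token sequence that happens to be false, which is why fine-tuning can reduce it but never fully remove it.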

Mega Comrade fucked around with this message at 13:20 on Dec 17, 2023

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

SaTaMaS posted:

It's missing introspection, which may be added in ChatGPT 5 with "Q*" but it has self-learning, both in the form of context windows modifying future output in a session, and user feedback being used to fine-tune future model releases.

I don't follow how either of those examples is self-learning?

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

aw frig aw dang it posted:

This thread has done a great job of proving cinci zoo sniper correct w/r/t this topic, and now it can be gassed. Thank you.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

BougieBitch posted:

you've always been able to do that in Photoshop or whatever even from a blank canvas if you happen to know what instructions to give it.


Photoshop didn't have copyrightable material fed into it, though, so I don't think the comparison works here. Photoshop is worlds closer to traditional art creation than AI image generators are.

I think the law will end up in a place where the large model companies have to do everything they can to restrict copyright regurgitation, with everyone understanding it can't be totally prevented. Similar to how social media companies aren't held liable for hate speech on their platforms as long as they show they are actively trying to stop it.

Mega Comrade fucked around with this message at 10:21 on Dec 23, 2023

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
We already have human-controlled drone warfare. Various militaries around the world have been experimenting with machine-learning AIs controlling them for a while.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Marketing departments in software have ruined lots of perfectly good words.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

KwegiboHB posted:


It's been wild watching staunch leftists do a complete 180 and suddenly take up arms to protect capitalism. Wild.


This take was just as dumb when crypto bros were using it.

Copyright is a crap system but it's all artists have to protect their work and scrape a living.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Great, and if all those companies making profit off copyrighted material in their models switch to public domain sources, all those lawsuits can be dropped.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
I'm not sure if it's been discussed but the NYT lawsuit is pretty interesting and seems stronger than a lot of others.

They have over 100 examples of verbatim NYT articles being spat out, and also examples of hallucinations the model then claims are from the NYT, so you have both copyright infringement and damage to reputation. Maybe the Google Books precedent will cover this, but since some outputs reproduce such a large portion of the articles, it also might not.

https://nytco-assets.nytimes.com/2023/12/Lawsuit-Document-dkt-1-68-Ex-J.pdf

KwegiboHB posted:

That's a great sentiment except I've read these lawsuits and the law and there's greater than even odds that what these companies are doing is legal. What then?

Then nothing, they make millions, artists get screwed over again, you get to play with your toys.

KwegiboHB posted:

The reason you can only "scrape" a living is because billionaires have already stolen all of your money. Tax the poo poo out of them. Steal it back.


When that starts happening, you can dismantle copyright all you like and I won't complain. But you don't remove a system, regardless of how poor, until you have a replacement.

Mega Comrade fucked around with this message at 10:22 on Jan 5, 2024

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
I'm not missing anything. I'm fully aware a copyright win isn't going to magically make life better for artists. But I also don't see why I should be rooting for Microsoft.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
What are you on about?

Are you seriously saying we should write in such a way as to try and influence future models?

I know your understanding of this technology has been shown to be lacking, but you're trolling at this point, surely?

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
If this technology is going to take a massive amount of jobs, as you have claimed, then those fears are founded, aren't they?

You can't have it both ways: claiming this technology is revolutionary and will change the entire labour market, while arguing leftists are overreacting.

Mega Comrade fucked around with this message at 08:42 on Jan 6, 2024

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
What is the alternative action? To champion UBI and protections for the working class? We do that already.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Monopolies are already forming around this technology.

It will be monopolised regardless of the copyright outcomes.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Open models won't exist either way, in time. The way this technology has to scrape data and train won't survive the problems of model collapse, with the world's main data sources (Reddit, Stack Overflow etc) already closing off their APIs.

We are heading towards a few companies owning the keys to the kingdom; I don't see much difference whether it's Disney or Microsoft.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

SCheeseman posted:

Anyone can have the "keys" to SDXL and LLaMA2 and those models are useful today.

iirc model collapse is something that happens when a model is trained on it's own output? If open models can exist they can train on closed models, the quality may not be as good, but it may result in something competitive or at least useful. Or learning data could be scraped instead of accessed via an API which is messier and would make the web admins grumpy.

A SETI@home-like distributed training system would help with a truly grassroots generative AI, I've only heard talk of this rather than anything concrete or real though

They are useful now, but they already have limits. What they can do can be stretched through various techniques, but eventually you need fresh data and training, and that causes issues for future open models.

At the moment you can use another model to train, and it's been shown to be incredibly effective at a fraction of the cost. But I don't see a future where big companies like OpenAI continue to allow that. They have already moved against the various startups reselling their API with custom functions on top, and I think they will eventually move against researchers too. They have made it clear they are a for-profit company and want to be the main game in town; letting open models flourish off their back isn't in their interest.

Classic web scraping is possible, but it's far, far harder to do and pretty easily blocked. It's a possible path, but a slow one with huge hurdles.

The future of open models is on shaky ground.
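The "use another model to train" approach mentioned above is essentially knowledge distillation: the student model is fit to the teacher's output distribution instead of raw labelled data. A tiny sketch with made-up distributions shows the objective; real distillation runs this cross-entropy (or a KL term) over millions of teacher outputs.

```python
# Toy distillation objective: score a student by how closely its output
# distribution matches the teacher's soft labels. Numbers are made up.
import math

def cross_entropy(p_teacher, q_student):
    """H(p, q) = -sum p * log q; lower means the student matches better."""
    return -sum(p * math.log(q) for p, q in zip(p_teacher, q_student))

teacher = [0.7, 0.2, 0.1]           # teacher's soft labels for one input
good_student = [0.65, 0.25, 0.10]   # close to the teacher
bad_student = [0.10, 0.10, 0.80]    # far from the teacher

# The training loop would adjust student weights to minimise this loss.
print(cross_entropy(teacher, good_student) < cross_entropy(teacher, bad_student))  # True
```

This is exactly why API access matters: the teacher's outputs are the training data, so whoever controls the API controls whether distillation is possible at all.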

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

KwegiboHB posted:


That's it, no mess or fuss. That's all you need to do now and bam you have your own local model that, depending on your hardware, can rival or exceed ChatGPT 4. Today. Now. Stop acting like this is hard.

Lol stop acting like you understand this technology, every post you make shows you do not.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Mistral claims Mixtral 8x7B matches GPT-3.5 on many metrics, not GPT-4.

Stop posting falsehoods. Stop strawmanning people's posts.


Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

KwegiboHB posted:

The mixes and merges of Mixtral 8x7B, not the base model. I'm going to bed, have fun now.

No mix or merge of Mixtral 8x7B surpasses GPT-4.

If it does I'm sure you can provide some evidence. I'm happy to be corrected.

But this all misses the point. I wasn't even arguing about running local models, I was talking about the future of open source models and where they will get their data and training from.

Mistral has opened up its models, but it's not an open-source non-profit; it's a private startup with huge backing. Releasing models this way is good for building hype, but not for making the profit it will inevitably want to make.

Can an actual non-profit compete with these companies long term? I don't think it can, which means the future of open models is going to be determined by these for-profit companies, which historically hasn't worked out well for openness.

Mega Comrade fucked around with this message at 13:15 on Jan 6, 2024
