Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Watermelon Daiquiri posted:

is an ai chatbot a person now

If a person creates an AI and the AI generates art, the copyright of the art belongs to the creator of the AI.

So I guess if you create an AI to give legal advice, all the legal ramifications of that advice also fall on the creator of the AI.


Inferior Third Season
Jan 15, 2005

Watermelon Daiquiri posted:

is an ai chatbot a person now
It is if you make it a corporation. The AI chatbot can help you fill in the paperwork.

Name Change
Oct 9, 2005


"AI" in Open AI, ChatGPT, various image generators, etc. is a marketing term to disguise what is actually going on.

"The Cloud!!!" is someone else's computer. "AI!!!" is someone else's work.

The algorithm that refines returns of scraped data (that's still often incorrect or incoherent) is the result of countless man hours of work, and that's before you get to the people who actually made the data being scraped.

I mean everyone ITT knows this, but it's bothering me more and more.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
What! You're telling me this generated AI image isn't actually totally original!

exmachina
Mar 12, 2006

Look Closer

Mega Comrade posted:

If a person creates an AI and the AI generates art, the copyright of the art belongs to the creator of the AI.

So I guess if you create an AI to give legal advice, all the legal ramifications of that advice also fall on the creator of the AI.

It seems pretty likely that AI art will in fact not be copyrightable in the US

Kwyndig
Sep 23, 2006

Heeeeeey


Yeah, the Supreme Court has already said that if it's not the direct action of a human, and instead of some other thing, it's not eligible for copyright. There was a case where a guy tried to copyright a monkey selfie, and he lost.

BiggerBoat
Sep 26, 2007

Don't you tell me my business again.

T Zero posted:

In the subscription era, you can lose your access to letters

https://twitter.com/semaforben/status/1618683572839387138

This doesn't make any sense. Pretty sure fonts are a one-time purchase, unless something has changed in the printing and graphics industry. Though it wouldn't surprise me if font creators went to a subscription-based model like Adobe did with their software, checking online to see whether you're registered.

E

Or probably it works differently for website layouts now that I think about it.

BiggerBoat fucked around with this message at 13:48 on Jan 27, 2023

CmdrRiker
Apr 8, 2016

You dismally untalented little creep!

Inferior Third Season posted:

It is if you make it a corporation. The AI chatbot can help you fill in the paperwork.

I would like to invest in your AI chatbot's business.

Cheesus
Oct 17, 2002

Let us retract the foreskin of ignorance and apply the wirebrush of enlightenment.
Yam Slacker

Boris Galerkin posted:

“Chose not to renew due to price increase” is not the same thing as “can’t afford.”
I would love to know more details in this case.

In software, there may be a semi-reasonable explanation: a subscription and the associated increases go toward adding new functionality and fixing issues (and making the sellers massive profits).

But a static product like a font? With little to no room for improvement?

I want to believe the background was a :smug: "It's NPR! Of course they'll pay a 60% yearly increase!".

CmdrRiker
Apr 8, 2016

You dismally untalented little creep!

A font is a copyrightable piece of software, but it never occurred to me that it could also be regarded as a SaaS product.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Kwyndig posted:

Yeah the Supreme Court has already said if it's not the direct action of a human, and instead some other thing, it's not applicable to copyright. There was a case where a guy tried to copyright a monkey selfie that he lost.

This never made it to the Supreme Court fwiw. It was the US Copyright Office who said lol no because a non-human created the selfie. Another lower court dismissed a separate case citing the non-human created aspect.

Main Paineframe
Oct 27, 2010

Boris Galerkin posted:

This never made it to the Supreme Court fwiw. It was the US Copyright Office who said lol no because a non-human created the selfie. Another lower court dismissed a separate case citing the non-human created aspect.

It wasn't just that a non-human created the selfie, but that there was no human involvement or creative intent at all - the selfie was a complete accident and a total surprise to the human. If the human had trained the monkey to take pictures and then handed the camera to the monkey, the human would have been able to claim copyright on the resulting pictures.

Last Chance
Dec 31, 2004

BiggerBoat posted:

This doesn't make any sense. Pretty sure fonts are a one-time purchase, unless something has changed in the printing and graphics industry. Though it wouldn't surprise me if font creators went to a subscription-based model like Adobe did with their software, checking online to see whether you're registered.

E

Or probably it works differently for website layouts now that I think about it.

Lol yeah we had to switch from nice-but-not-free fonts to ugly rear end google fonts in order to cut costs on apps/websites. It’s quite a time to be alive.

Rogue AI Goddess
May 10, 2012

I enjoy the sight of humans on their knees.
That was a joke... unless..?

Watermelon Daiquiri posted:

is an ai chatbot a person now
Always have been.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.

Name Change posted:

"AI" in Open AI, ChatGPT, various image generators, etc. is a marketing term to disguise what is actually going on.

"The Cloud!!!" is someone else's computer. "AI!!!" is someone else's work.

The algorithm that refines returns of scraped data (that's still often incorrect or incoherent) is the result of countless man hours of work, and that's before you get to the people who actually made the data being scraped.

I mean everyone ITT knows this, but it's bothering me more and more.

If someone asks you a question that you know the answer to, did you spontaneously get the answer, or did you synthesize it from someone else's work that you studied at some point in the past?

DALL-E's answer to 'Wow, I love your work! Who were your influences?' is 'Everybody and everything in the training data set.'

Riven
Apr 22, 2002
Sure but the real answer is “the mathematical likelihood of the next word that should come after this one.” It’s not actually intelligently synthesizing ideas.

Antigravitas
Dec 8, 2019

Die Rettung fuer die Landwirte:

Cheesus posted:

But a static product like a font? With little to no room for improvement?

It's really no different from licensing any other copyrighted material. If you want to use a certain song in a game or movie you have to license it, and these licenses are typically time-limited. The same is often true of fonts used in your publication.

And don't underestimate font maintenance. If you want a font that looks consistent with different scripts used around the world, supports ligatures, right to left writing, proper keming, hinting… you usually have to pay. Fonts are hilariously complex nowadays.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Riven posted:

Sure but the real answer is “the mathematical likelihood of the next word that should come after this one.” It’s not actually intelligently synthesizing ideas.

Yeah. These AIs are excellent at appearing intelligent, but they actually have zero understanding of what they're spitting out and can't even grasp the most basic of concepts.

shoeberto
Jun 13, 2020

which way to the MACHINES?

Riven posted:

Sure but the real answer is “the mathematical likelihood of the next word that should come after this one.” It’s not actually intelligently synthesizing ideas.

That's what has been eating me. We already have many systems built upon knowledge graphs, which I would argue is closer to how human brains work with expertise. Even if you don't know the answer, you know where to look to verify the knowledge. I don't know Jennifer Aniston's birthday, but I know that I don't know it, and I also know where to look to find the specific fact.

These systems work much more like very convincing bullshitters, but without the ability to navigate social situations where they would get called out. They're just coldly giving out probabilistic answers with zero determinism. Most of these companies are going to find this out the hard way when the rubber meets the road, just like we went through with the blockchain hype.
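That knowledge-graph-vs-bullshitter contrast, caricatured in a few lines (all entities, relations, and canned answers below are made-up placeholders):

```python
# Toy contrast: a knowledge-graph lookup can explicitly say "I don't know",
# while a pure generator always produces *something* plausible-sounding.
import random

knowledge_graph = {
    ("Paris", "capital_of"): "France",
    ("water", "formula"): "H2O",
}

def kg_answer(entity, relation):
    """Return a stored fact, or admit ignorance."""
    return knowledge_graph.get((entity, relation), "I don't know")

def bullshitter_answer(entity, relation):
    """Stand-in for a probabilistic generator: always answers, never abstains."""
    return random.choice(["France", "H2O", "42", "blue"])

print(kg_answer("Paris", "capital_of"))   # France
print(kg_answer("Aniston", "birthday"))   # I don't know
print(bullshitter_answer("Aniston", "birthday"))  # a confident-sounding guess
```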

I mentioned to my friend yesterday that we'll probably see GPT-based startups getting naming rights to sports stadiums in the next year or so because we're getting back on this roller coaster, apparently.

Evil Fluffy
Jul 13, 2009

Scholars are some of the most pompous and pedantic people I've ever had the joy of meeting.

Heck Yes! Loam! posted:

Currently, not legally

Star trek had a lot to say on this topic.

Stuff like ChatGPT is closer to a glorified set of IF/THEN statements that regularly run into walls or contradict themselves, than any sort of actual AI, let alone the level of AI that Star Trek characters like Data represent.

cinci zoo sniper
Mar 15, 2013




Main Paineframe posted:

Refer back to what I said earlier:

If someone dealing with a major bureaucracy openly says themselves that they're breaking the rules, the bureaucracy isn't going to go out of its way to check whether that's true. It'll just say "stop breaking the rules, dumbass". Neither the bar associations nor the court system want to waste time or effort peeling back the marketing jargon to define what DoNotPay is actually doing.

If DoNotPay says they're bringing in a robot lawyer, the system will respond "your robot lawyer doesn't have a law license", and that'll be that. If DoNotPay wants to argue that they're just providing a self-help tool and not an actual lawyer, then maybe they'd get a different result. But they're not willing to argue that, because it'd go against their marketing and against the story they're telling investors.

It's also quite possible that DoNotPay was always going to back out and claim Big Law forced them. There have been some suggestions that DoNotPay is actually fake, and that they're just using a fill-in-the-blanks template with some tweaks from human employees. Here are some excerpts from a long Twitter thread by someone who actually gave it a try:
https://twitter.com/KathrynTewson/status/1617918604242227200
https://twitter.com/KathrynTewson/status/1617930994346250240
https://twitter.com/KathrynTewson/status/1617937290252423169

Here's her full article, by the way. https://www.techdirt.com/2023/01/24/the-worlds-first-robot-lawyer-isnt-a-lawyer-and-im-not-sure-its-even-a-robot/

Heck Yes! Loam!
Nov 15, 2004

a rich, friable soil containing a relatively equal mixture of sand and silt and a somewhat smaller proportion of clay.

Evil Fluffy posted:

Stuff like ChatGPT is closer to a glorified set of IF/THEN statements that regularly run into walls or contradict themselves, than any sort of actual AI, let alone the level of AI that Star Trek characters like Data represent.

I was thinking about the holographic doctor.

Photons be free!

BabyFur Denny
Mar 18, 2003

Riven posted:

Sure but the real answer is “the mathematical likelihood of the next word that should come after this one.” It’s not actually intelligently synthesizing ideas.

That's just incorrect. Markov chain generators have been on the internet for 20+ years and they are absolutely poo poo compared to chatgpt. There's way more to it than word to word probability.

Also this is basically the Benz Model 1 of AIs. Nobody thought seat belts or speed limits would be needed when they saw that car. Chatgpt is a big jump and AI is just going to progress further and further from here.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

BabyFur Denny posted:

That's just incorrect. Markov chain generators have been on the internet for 20+ years and they are absolutely poo poo compared to chatgpt. There's way more to it than word to word probability.

Also this is basically the Benz Model 1 of AIs. Nobody thought seat belts or speed limits would be needed when they saw that car. Chatgpt is a big jump and AI is just going to progress further and further from here.
ChatGPT looks more convincing, but it is not significantly closer to any idea of "understanding" or "reasoning" than older Markov chain models - look at the post above illustrating the very simple lack of ability to know what even and odd numbers are. If it can't do that, it fundamentally can't do anything serious. The gap between where "AI" is now and true AI is vast - for one, everything still uses gradient descent on GPUs, which takes billions of times more power than human neurons.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

BabyFur Denny posted:

That's just incorrect. Markov chain generators have been on the internet for 20+ years and they are absolutely poo poo compared to chatgpt. There's way more to it than word to word probability.

Also this is basically the Benz Model 1 of AIs. Nobody thought seat belts or speed limits would be needed when they saw that car. Chatgpt is a big jump and AI is just going to progress further and further from here.

In 2015, Musk promised fully autonomous vehicles within 3 years. It's now 8 years later, yet they still try to drive onto train tracks and think the Wells Fargo logo is a stop sign.

Electric Wrigglies
Feb 6, 2015

Mega Comrade posted:

In 2015, Musk promised fully autonomous vehicles within 3 years. It's now 8 years later, yet they still try to drive onto train tracks and think the Wells Fargo logo is a stop sign.

you talking about humans or computers? Because people driving into bridges with multiple signs saying you have to be this short to go under happens every single day.

Motronic
Nov 6, 2009

Electric Wrigglies posted:

you talking about humans or computers? Because people driving into bridges with multiple signs saying you have to be this short to go under happens every single day.

Are you seriously mounting a defense for beta testing really lovely machine learning on public highways here?

hobbesmaster
Jan 28, 2008

Evil Fluffy posted:

Stuff like ChatGPT is closer to a glorified set of IF/THEN statements that regularly run into walls or contradict themselves, than any sort of actual AI, let alone the level of AI that Star Trek characters like Data represent.

ChatGPT feels a lot like the “Chinese room” thought experiment. It’s handing you back what kinda looks right without any real understanding.

Of course the next question is if that’s “all” our brains are doing as well.

cinci zoo sniper
Mar 15, 2013




BabyFur Denny posted:

That's just incorrect. Markov chain generators have been on the internet for 20+ years and they are absolutely poo poo compared to chatgpt. There's way more to it than word to word probability.

GPT is a family of autoregressive language models, and predicting the next word in a sequence of words is literally all that they do. “Autoregressive” means a feed-forward process where every step takes into account a preceding history of some length. In contrast, Markov chains classically exhibit something mathematicians call “memorylessness”, which in simple English means that the next step depends only on the current state, not on anything earlier (or later). Here’s a handy GPT-style model illustration:
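A rough toy sketch of that contrast (the tables and the “model” below are made up for illustration, not real trained networks):

```python
# First-order Markov: the prediction depends only on the current word.
# Autoregressive: the whole preceding history is the input at every step.

def markov_next(current_word, table):
    # Memoryless beyond the current state.
    return table.get(current_word)

def autoregressive_next(history, model):
    # The full preceding sequence conditions the next prediction.
    return model(tuple(history))

bigram_table = {"the": "cat", "cat": "sat", "sat": "down"}
toy_model = {("the",): "cat", ("the", "cat"): "sat", ("the", "cat", "sat"): "down"}.get

history = ["the"]
while True:
    nxt = autoregressive_next(history, toy_model)
    if nxt is None:
        break
    history.append(nxt)

print(history)                           # ['the', 'cat', 'sat', 'down']
print(markov_next("cat", bigram_table))  # sat
```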

cinci zoo sniper fucked around with this message at 19:16 on Jan 27, 2023

pumpinglemma
Apr 28, 2009

DD: Fondly regard abomination.

Markov chain generators basically work by going through existing literature and recording a gigantic table: for every pair of words x and y, what is the probability that word x immediately follows word y? And then it just generates sentences by sampling from the table. This is both conceptually and computationally easy and the results are predictably crap.
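That table-and-sample loop can be sketched in a few lines (toy corpus; storing follower lists instead of explicit probabilities, which samples from the same distribution):

```python
# Minimal bigram Markov generator: record which words follow each word
# in a corpus, then generate by repeatedly sampling a follower.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Build the table: for each word, the list of words observed after it.
table = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    table[a].append(b)

def generate(start, length):
    words = [start]
    for _ in range(length - 1):
        followers = table.get(words[-1])
        if not followers:  # dead end: the word never had a successor
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the", 6))  # e.g. "the cat sat on the mat"
```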

Older GPT iterations - I think I'm thinking of GPT-2, here - worked by instead asking: for every word x and every ten-word phrase y, what is the probability that word x immediately follows phrase y? The set of ten-word phrases is ridiculously huge and there's no hope of ever storing it explicitly, so instead you use gradient descent on weights in a neural network set up in "deep learning" fashion. The crucial point (and the reason deep learning was a breakthrough) is that while it's still "just" gradient descent, it's gradient descent on a really rich and well-structured parameter set that works really well for this prediction task.

For example, if you train one on millions of sentences like "5967 + 3 = 5970", then the parameter set is rich enough for the gradient descent to be able to discover some heuristics for addition and have a hope of correctly completing sentences like "400 + 352 = " that it hasn't seen before. It doesn't "know" how addition works, it could certainly never explain it, and it won't have a 100% success rate, but it will have some heuristics formed by looking at examples.

This is something our brains can do too - when you e.g. hear a piece of music and try to decide whether it's from Mozart or Pokemon, assuming the obvious tells like instrumentation are removed, you'll probably have a decent guess at which one, but you won't be able to fully articulate why unless you're a music theorist. While this was a huge step forward, not being able to look more than ten words into the past was a massive handicap.

I don't know what they're doing, but modern GPT iterations seem to be capable of far more than this. DaVinci-001, the successor to GPT-3, was capable of answering one of my exam questions that required it to generate a couple of paragraphs of coherent text accurately converting a story problem into a linear programming problem. The story problem contained a ton of unfamiliar language that wouldn't have been in any CS textbooks in the training corpus, but despite this it would have scored a solid 13/15 and been in the top half of the class. I haven't yet tried DaVinci-003 (the newest version) on one of my exams but I suspect it would do pretty well. It's still a very very long way from "true AI", and it's still just gradient descent on a neural network. But if you asked someone in AI a decade ago whether gradient descent on a neural network would be able to take you to something like ChatGPT, they'd laugh in your face. When GPT-2 came out, everyone was really surprised that it was able to handle things like arithmetic problems better than contemporary "purpose-trained" AIs. The fact that modern AI is so capable despite just being gradient descent is legitimately surprising, not just from an outside view but also to people in the field, and the question of how much further it can go is a really interesting one - not just from the viewpoint of building stuff but also for what it says about how our own intelligence works.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

hobbesmaster posted:

Chat GPT feels a lot like the ”Chinese room” thought experiment. It’s handing you back what kinda looks right without any real understanding.

Of course the next question is if that’s “all” our brains are doing as well.

Yeah I always thought the Chinese Room was a copout. As if having the translation book in our actual heads is what makes us real for some reason.

hobbesmaster
Jan 28, 2008

Rocko Bonaparte posted:

Yeah I always thought the Chinese Room was a copout. As if having the translation book in our actual heads is what makes us real for some reason.

ChatGPT keeps making me think of the beginning of Blindsight

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Electric Wrigglies posted:

you talking about humans or computers? Because people driving into bridges with multiple signs saying you have to be this short to go under happens every single day.
Self-driving is nowhere near as good as human drivers. It still only works in ideal conditions and even then fails in all sorts of unpredictable cases. It turns out that driving requires reasoning, which is something no ML system on the horizon can do.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
It's the same thing as that 'p-zombie' bs which just reeks of 'we humans are ~special~' idealism and solipsism.

cinci zoo sniper
Mar 15, 2013




pumpinglemma posted:

I don't know what they're doing, but modern GPT iterations seem to be capable of far more than this.

GPT-3’s primary improvement came from it being more than 10 times larger than GPT-2. Model architecture has not changed significantly since “GPT-1”, in fact - OpenAI “just” keeps getting better at training and fine-tuning the same model, and keeps getting more money to spend on GPUs.

Last Chance
Dec 31, 2004

Heck Yes! Loam! posted:

I was thinking about the holographic doctor.

Photons be free!

captain america: i understand this reference

Electric Wrigglies
Feb 6, 2015

Motronic posted:

Are you seriously mounting a defense for beta testing really lovely machine learning on public highways here?

eh, it is a bit of a trolley problem, because development of automation in driving could well reduce harm to people from driving (not just crashes but energy efficiency, time savings, etc.) over the medium to long term. Cruise control still kills people to this day, even though it started out as a lovely PID loop and is now quite refined.

A lot of attacks on automation / AI / machine learning consist of comparing the worst result of the technological solution with the best human outcome. e.g.
Supv: Hey joe, how about using the expert system to run the plant in stable operation?
Joe: What? I can beat that stupid thing every day, it can't even handle fault x without intervention
Supv: ok, what about last shift where feeder two was underfed for seven hours.
Joe: We are busy and tired, it's loving bullshit that you expect us to watch this thing all day long, you loving sit there for hours monitoring everything and not miss something.
Supv: that's what the expert system is for, to tirelessly watch and optim...
Joe: You're a gently caress-knuckle. I can beat that stupid thing every day.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
Like, ffs, my brain just put that solipsism in there organically, and I had to stop and think about how that actually worked in the context. My brain could pull that information together because it had the training for it. GPT models are far from true sentience, but they do have some intelligence.

Family Values
Jun 26, 2007


If the law bot is allowed by the courts, how long until Louisiana or some other regressive poo poo hole decides that actually, the constitutional right to an attorney can be satisfied by a contract with DoNotPay.com, and shuts down all their public defender offices?

Then poors will get the legal equivalent of:


Mooseontheloose
May 13, 2003

Electric Wrigglies posted:

eh, it is a bit of a trolley problem, because development of automation in driving could well reduce harm to people from driving (not just crashes but energy efficiency, time savings, etc.) over the medium to long term. Cruise control still kills people to this day, even though it started out as a lovely PID loop and is now quite refined.

A lot of attacks on automation / AI / machine learning consist of comparing the worst result of the technological solution with the best human outcome. e.g.
Supv: Hey joe, how about using the expert system to run the plant in stable operation?
Joe: What? I can beat that stupid thing every day, it can't even handle fault x without intervention
Supv: ok, what about last shift where feeder two was underfed for seven hours.
Joe: We are busy and tired, it's loving bullshit that you expect us to watch this thing all day long, you loving sit there for hours monitoring everything and not miss something.
Supv: that's what the expert system is for, to tirelessly watch and optim...
Joe: You're a gently caress-knuckle. I can beat that stupid thing every day.

The autonomous car and the "AI lawyer" have a similar problem: they can't handle edge cases, in a world where almost everything is an edge case. "Ideal road conditions" are never a real thing, and as humans our brains have adapted on the road to filter the bombardment of information; AI cars can't even seem to do that. Add to that the number of close calls we don't hear about, or that aren't reported to Tesla because of human intervention, and we have a VERY incomplete picture of what is actually going on with the automated AI. Yeah, sure, fine, the potential benefits are there IF they can get the technology to work outside the best-case scenario.

Essentially, what I am saying is you are a bit clouded by optimism here. You've stated the end goal and the process, but we also keep seeing the numerous roadblocks and technical challenges that they can't get around right now. It's not a "humans are superior" thing, it's a "they haven't proven the tech" thing. Also, we need fewer cars.

Somewhat on topic: I see Captchas are focusing on sunflowers right now, and I desperately want to know why they gently caress up AIs so much.
