Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Tei posted:

Maybe they can research HOW to keep track of how documents affect the training data, then subtract that training.

Correct me if I’m wrong but isn’t the thing with ai/ml/nn/etc that this is, or is so far, impossible to do? Something something black box can’t look into the weights to figure out why specifically it does this and not that?

KillHour
Oct 28, 2007


It's not at all comparable to a Bayesian filter (or any similar statistical method). With those, you are basically taking a running average of your inputs as you go along: the data actually goes into the system, and then some function is applied to it. I'm going to assume you're talking about diffusion models here, but most generative models are similar. With a diffusion model, you never put an image into it. Instead, you have it guess at what the image should be, and you turn the specific way it was wrong into a big multidimensional number called a vector, and it's that vector that gets applied to the weights. But importantly, that vector isn't the original image - it's a description of how far the guess was from that image. Which means it only applies to that guess at that specific point in training. If you tried to undo that weight application on a finished model, you'd just gently caress the model.
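To make that concrete, here's a toy sketch of one such update step (purely illustrative - the tiny arrays, the outer-product gradient, and the learning rate are all made up for the example, not any real diffusion setup):

```python
import numpy as np

# Toy sketch of one training step: the model guesses the noise that was
# added to an image, and only the *error* of that guess touches the weights.
rng = np.random.default_rng(0)

weights = rng.normal(size=(4, 4))   # the "model": a single linear layer
image = rng.normal(size=4)          # a training image
noise = rng.normal(size=4)
noisy = image + noise

guess = weights @ noisy             # model's guess at the noise
error = guess - noise               # how wrong the guess was

# The weight update is built from the error signal (an outer product here),
# not from the image itself - and it depends on the current weights too.
update = np.outer(error, noisy)
weights -= 0.01 * update
```

The image never enters the weights directly, and because each update depends on what the weights were at that moment, you can't subtract it back out of a finished model without replaying the whole training run.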

Boris Galerkin posted:

Correct me if I’m wrong but isn’t the thing with ai/ml/nn/etc that this is, or is so far, impossible to do? Something something black box can’t look into the weights to figure out why specifically it does this and not that?

This is one of those things where the scientific terminology is way more precise than the layman understanding of the same term. When people hear "it's a black box and we don't know what influenced it" they think "science hasn't found the answer yet but we could if we just knew more." The reality is that due to the way the math works, we cannot know the influence of each input on the weights because the math doesn't allow it. It's not even a meaningful question.

It's much like how the Heisenberg uncertainty principle doesn't mean that "we can't know the exact speed and position of a particle because we can't make precise enough instrumentation." It means that "beyond a certain degree of certainty, the mathematical relationship of speed and position are fundamentally linked and the question literally does not make mathematical sense."

That's also how we know that the copyrighted data is not in the model. It's not hiding where if only we had better tools, we could find it. It's literally not there.
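A trivial illustration of why that recovery is impossible even in principle: training folds every update into the same numbers, and addition forgets its inputs.

```python
# Two different "training histories" can leave identical final weights,
# so the weights alone cannot tell you which inputs produced them.
history_a = [0.5, 1.5, 2.0]   # one possible sequence of weight updates
history_b = [1.0, 1.0, 2.0]   # a completely different sequence
final_a = sum(history_a)
final_b = sum(history_b)
assert final_a == final_b == 4.0   # same final "weight", different pasts
```

Given only the final value, there is no way to decide which history produced it; a model's weights are that kind of sum, at enormous scale.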

Edit: It is possible to end up with actual data encoded into your model. This is called overfitting and is very undesirable. Lots of work is put into making sure it doesn't happen, or at least happens as little as possible.

KillHour fucked around with this message at 23:47 on May 23, 2023

Tei
Feb 19, 2011
Probation
Can't post for 14 hours!

Boris Galerkin posted:

Correct me if I’m wrong but isn’t the thing with ai/ml/nn/etc that this is, or is so far, impossible to do? Something something black box can’t look into the weights to figure out why specifically it does this and not that?

I just showed you how it's possible for a different blob of numbers, a Bayesian filter. So I say maybe it's possible for their blob of numbers.

And it's a moot point. They can train on the whole dataset again, but without that infringing document. The "waaaah, it's expensive" complaint should be ignored.
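For contrast with the diffusion case discussed above, here's the kind of exact subtraction a count-based Bayesian filter really does allow (a minimal sketch, not any real filter implementation):

```python
from collections import Counter

# A count-based spam filter trains by adding word counts, so it can
# "forget" one document exactly by subtracting the same counts back out.
spam_counts = Counter()

def train(counts, words):
    counts.update(words)      # add this document's word counts

def untrain(counts, words):
    counts.subtract(words)    # remove them again, exactly

doc = ["cheap", "pills", "cheap"]
train(spam_counts, doc)
untrain(spam_counts, doc)
assert all(v == 0 for v in spam_counts.values())  # back to a clean slate
```

That exact-subtraction property is precisely what a trained neural network's weights don't have.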

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.
This video really just makes me wish I could find a good interview with Daniel Dennett on AI. Dude is incredibly opposed to it for some (what I think are) incredibly good reasons (the disaster around how what amounts to "counterfeit people" is going to interact with a society that already has a problem with eroding institutional trust and is increasingly rejecting the idea that things can be true) and what can be done about it.

Tei posted:

And it's a moot point. They can train on the whole dataset again, but without that infringing document. The "waaaah, it's expensive" complaint should be ignored.

No one has ever said that they couldn't, and in fact "has to retrain the model" has been the literal default stance of everyone talking about the issue so far, so who the gently caress are you even arguing with on that point?

GlyphGryph fucked around with this message at 23:41 on May 23, 2023

KillHour
Oct 28, 2007


Tei posted:

I just showed you how it's possible for a different blob of numbers, a Bayesian filter. So I say maybe it's possible for their blob of numbers.

As I just explained, it is not because of the fundamental differences between a diffusion model and a bayesian filter. Not in a "we don't know how" sense, but in a "we can mathematically prove it's impossible" sense.

Tei
Feb 19, 2011
Probation
Can't post for 14 hours!

KillHour posted:

As I just explained, it is not because of the fundamental differences between a diffusion model and a bayesian filter. Not in a "we don't know how" sense, but in a "we can mathematically prove it's impossible" sense.

Okay, KillHour, I accept your opinion.

KillHour
Oct 28, 2007


Tei posted:

Okay, KillHour, I accept your opinion.

Math isn't an opinion, and I don't own it.

Saying "figure it out" is like saying "well nobody has created a machine that makes free energy through perpetual motion yet, but it's surely possible!"

Edit: To be clear - the math involved isn't particularly complicated. It's undergrad level linear algebra. The researchers in these fields really, thoroughly, truly understand the math. Matrix multiplications are not magic. The reason these systems are cutting edge is because of their scale and cost in terms of computing resources, not because of some recently discovered exotic math.

KillHour fucked around with this message at 23:55 on May 23, 2023

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

Tei posted:

Okay, KillHour, I accept your opinion.

Encouraging this sort of attitude, an inability to recognize that there is an actual underlying truth behind factual statements and that words aren't magic that distorts reality to make things true, is one of the big risks of these next-generation AIs

Tei posted:

And it's a moot point. They can train on the whole dataset again, but without that infringing document. The "waaaah, it's expensive" complaint should be ignored.

And if they do, what actual benefit has been gained to anyone? It will still be able to produce infringing images, right? Isn't that what actually matters?

Jaxyon
Mar 7, 2016
I’m just saying I would like to see a man beat a woman in a cage. Just to be sure.

KillHour posted:

"Look man, it's not Joe Rogan saying those things, it's the published researcher he has on his show saying it while he just asks them questions!"

Correct, the problem with Joe Rogan is that he platforms bad people and is credulous with everyone. Conover tends to platform good people and is generally not credulous or naive.

quote:

All I said is that he's not neutral. He clearly has a horse in this race and is going to have guests on that agree with him. You're welcome to watch and form your own conclusions, but a biased interviewer giving a friendly platform for people to push an agenda is not a new thing.

Nobody is neutral and everyone is pushing an agenda. Watch the video or don't, but don't waste 3 posts having a meta-argument about a video because you're worried you won't agree with it.

quote:

The exact same thing happened a few pages ago with Sam Altman and that dude who just so happens to be super good friends with a bunch of nazis.

Cool, watch it or don't.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.
It’s actually kind of astonishing how basic most of the math is. It’s just intuition on the best way to use it.

Jaxyon
Mar 7, 2016
I’m just saying I would like to see a man beat a woman in a cage. Just to be sure.

GlyphGryph posted:

I'm familiar with Bender, though - she's the Stochastic Parrots person. Some good broad-strokes criticisms and some really braindead hot takes over the last couple years, but overall decently knowledgeable, so I might eventually give it a go just to hear what she has to say. She did a panel not long ago with Christopher Manning, who is very pro-AI and has even more blisteringly braindead hot takes.

Timnit Gebru is another author of the Stochastic Parrots paper and was previously head of AI ethics at Google until they had an issue with one of her papers.

I didn't require anyone to watch this video but the amount of posts from people complaining that this video won't confirm their biases but not watching it is pretty LOL.

Like, I never said it was the be-all of AI videos, just a video I found pretty interesting.

KillHour
Oct 28, 2007


GlyphGryph posted:

Encouraging this sort of attitude, an inability to recognize that there is an actual underlying truth behind factual statements and that words aren't magic that distorts reality to make things true, is one of the big risks of these next-generation AIs

My goon, have you been living under a rock since before the internet? We've been living in a post-truth society for literally decades (or, probably more accurately, since forever: the search for fundamental truths as a common communal goal of society is a thing that rarely happens, and such periods are often looked back on as golden ages).

The biggest risk of [arbitrary technology] is that we don't know what the risks are yet. The chance that it ends up being something we already know about is pretty slim.

Jaxyon posted:

Correct, the problem with Joe Rogan is that he platforms bad people and is credulous with everyone. Conover tends to platform good people and is generally not credulous or naive.

"The problem with 'x' pushing an agenda is that I disagree with the agenda" is a bad take. I'm not saying Conover is a bad dude, even if I don't agree with him on this thing. I'm saying don't just be like "Oh look, here's a super interesting video" and get mad when I point out that the video is biased. Just admit that it's biased and people can decide if they want to watch it.

Jaxyon posted:

Timnit Gebru is another author of the Stochastic Parrots paper and was previously head of AI ethics at Google until they had an issue with one of her papers.

I didn't require anyone to watch this video but the amount of posts from people complaining that this video won't confirm their biases but not watching it is pretty LOL.

Like, I never said it was the be-all of AI videos, just a video I found pretty interesting.

I'm very familiar with both her and Conover, which is why I don't need to watch the video to know that it deserved a disclaimer. I read the paper she wrote. It's very famous.

KillHour fucked around with this message at 00:09 on May 24, 2023

Count Roland
Oct 6, 2013

Well, speaking of sharing resources, does anyone have links to good text articles on this subject? I don't really care if it's for or against, I'm just looking for interesting/insightful arguments.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

Jaxyon posted:

I didn't require anyone to watch this video but the amount of posts from people complaining that this video won't confirm their biases but not watching it is pretty LOL.

It's D&D, posting a video without even a basic summary in place of making arguments of your own and headlining people who are openly biased against the subject is always going to get this sort of response. Have you considered making better posts next time? Then perhaps the conversation will be more productive.

(also, I'm watching the damned video)

GlyphGryph fucked around with this message at 00:11 on May 24, 2023

Jaxyon
Mar 7, 2016
I’m just saying I would like to see a man beat a woman in a cage. Just to be sure.

KillHour posted:

"The problem with 'x' pushing an agenda is that I disagree with the agenda" is a bad take. I'm not saying Conover is a bad dude, even if I don't agree with him on this thing. I'm saying don't just be like "Oh look, here's a super interesting video" and get mad when I point out that the video is biased. Just admit that it's biased and people can decide if they want to watch it.

I'm not mad that you pointed out a video is biased. I am annoyed that your criticisms are incredibly naive. I absolutely 1000% agree it's biased, what a silly criticism to have.

A video? BIASED? Nooooooooooooooooooooooooooooooooooooooooooooo

All videos are biased, everyone has an agenda. Unbiased takes aren't real. What a smoothbrain hot take.

I'm more impressed by how hard you are trying to explain why you don't need to watch a video that might conflict with your opinion. Then don't watch it, but also don't type 4 posts about how upset you are that someone disagrees with you and made a video you haven't watched.

If you have problems with the content, you're going to have to get off your rear end and watch the video.

GlyphGryph posted:

It's D&D, posting a video without even a basic summary in place of making arguments of your own and headlining people who are openly biased against the subject is always going to get this sort of response. Have you considered making better posts next time? Then perhaps the conversation will be more productive.

I did post a summary, and said that Conover is biased because he's a content creator.

Maybe read posts better next time?

(USER WAS PUT ON PROBATION FOR THIS POST)

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

KillHour posted:

Math isn't an opinion, and I don't own it.

Saying "figure it out" is like saying "well nobody has created a machine that makes free energy through perpetual motion yet, but it's surely possible!"

Edit: To be clear - the math involved isn't particularly complicated. It's undergrad level linear algebra. The researchers in these fields really, thoroughly, truly understand the math. Matrix multiplications are not magic. The reason these systems are cutting edge is because of their scale and cost in terms of computing resources, not because of some recently discovered exotic math.

Wait, is AI literally just another Ax = b problem? (I don’t do AI but I do numerical computation stuff, and everything we do is solving Ax = b lol)

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

Jaxyon posted:

I did post a summary, and said that Conover is biased because he's a content creator.

Maybe read posts better next time?

You posted the summary after the meta-conversation about the video had already been kicked off, my man. You are continuing to propagate the conversation about the video instead of its actual contents even now - and doing so by misconstruing the statements other people are making, which is only going to make the conversation worse. C'mon.

Jaxyon
Mar 7, 2016
I’m just saying I would like to see a man beat a woman in a cage. Just to be sure.

GlyphGryph posted:

You posted the summary after the meta-conversation about the video had already been kicked off, my man. You are continuing to propagate the conversation about the video instead of its actual contents even now - and doing so by misconstruing the statements other people are making, which is only going to make the conversation worse. C'mon.

I posted the video, someone reminded me to post a summary, and then I did so. Before you ever replied.

I just read every single post between this one and my summary, and not a single person, including you, has talked about any of the content in my summary or the video.

It has been people making meta-commentary about the people in the video.

You want to talk about the contents? Talk about the contents.

KillHour
Oct 28, 2007


Jaxyon posted:

I'm not mad that you pointed out a video is biased. I am annoyed that your criticisms are incredibly naive. I absolutely 1000% agree it's biased, what a silly criticism to have.

A video? BIASED? Nooooooooooooooooooooooooooooooooooooooooooooo

All videos are biased, everyone has an agenda. Unbiased takes aren't real. What a smoothbrain hot take.

I'm more impressed by how hard you are trying to explain why you don't need to watch a video that might conflict with your opinion. Then don't watch it, but also don't type 4 posts about how upset you are that someone disagrees with you and made a video you haven't watched.

If you have problems with the content, you're going to have to get off your rear end and watch the video.

I gave a disclaimer that Conover, specifically, complains loudly and often about AI. I know he does this because this is not the first video he did on it. In fact, that link has a comment that I left an entire month ago highlighted as proof that I watched it.

I do not, in fact, have to watch an 82-minute video featuring him and a person famous for writing a paper that got her co-writer fired from Google to know that the video might be a tad biased.

You sure are having a meltdown over a relatively minor clarification / disclaimer.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

Jaxyon posted:

I just read every single post between this one and my summary, and not a single person, including you, has talked about any of the content in my summary or the video.

I followed up on your vague "she's published!" with the information I had about one of the people involved while I actually watched the video.

quote:

You want to talk about the contents? Talk about the contents.

I was under the assumption that you wanted to talk about the contents, which is why I was watching it.

Well, so far the content has been 90% Conover bullshitting and nothing said of actual substance, and if you're not interested in discussing the content either (since you're talking about the video and the people in it, like everyone else) I guess I can just assume the rest of it is the same and there's nothing to discuss. Saves me another 40 minutes, thanks.

If you want to talk about an actual point in the video, why not actually make the point yourself and we can have an actual discussion about it?

GlyphGryph fucked around with this message at 00:29 on May 24, 2023

KillHour
Oct 28, 2007


Boris Galerkin posted:

Wait, is AI literally just another Ax = b problem? (I don’t do AI but I do numerical computation stuff, and everything we do is solving Ax = b lol)

Actually, one of the most important requirements for a neural network is that you use a non-linear activation function, specifically because otherwise the entire system is reducible to a single linear equation :v:

I found this out the hard way when I wrote my first neural net like a decade ago, because I did, in fact, just make an overcomplicated linear equation and was very confused as to why it was poo poo.
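The collapse is easy to demonstrate with toy numbers (illustrative matrices only):

```python
import numpy as np

# Two "layers" with no activation in between are just one bigger matrix.
W1 = np.array([[1.0, -1.0], [2.0, 1.0]])
W2 = np.array([[1.0, 1.0], [0.0, 1.0]])
x = np.array([1.0, 2.0])

stacked = W2 @ (W1 @ x)       # two linear layers applied in sequence
collapsed = (W2 @ W1) @ x     # exactly the same thing as a single matrix
assert np.allclose(stacked, collapsed)

# Put a nonlinearity (ReLU here) between them and the collapse fails.
relu = lambda v: np.maximum(v, 0.0)
nonlinear = W2 @ relu(W1 @ x)
assert not np.allclose(nonlinear, collapsed)
```

That's the whole trick: the nonlinearity between layers is what makes depth actually buy you anything.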

If you want a good technical video on why this is, I recommend this one that goes into creating one from scratch with very good visualizations:
https://www.youtube.com/watch?v=hfMk-kjRv4c

If you don't want a good technical video about it but still need an hour-long video recommendation so you don't have to watch Adam Conover complain about capitalism, I can recommend this one, which is like the first video except completely sarcastic and hilariously pointless:
https://www.youtube.com/watch?v=Ae9EKCyI1xU

Jaxyon
Mar 7, 2016
I’m just saying I would like to see a man beat a woman in a cage. Just to be sure.

GlyphGryph posted:

I followed up on your vague "she's published!" with the information I had about one of the people involved while I actually watched the video.

I was under the assumption that you wanted to talk about the contents, which is why I was watching it.

Well, so far the content has been 90% Conover bullshitting and nothing said of actual substance, and if you're not interested in discussing the content either (since you're talking about the video and the people in it, like everyone else) I guess I can just assume the rest of it is the same and there's nothing to discuss. Saves me another 40 minutes, thanks.

If you want to talk about an actual point in the video, why not actually make the point yourself and we can have an actual discussion about it?

1. People are using the wrong terms in calling this stuff "AI"; like many techbro trends, it's mostly bullshit overhyping.
2. People are pretending ChatGPT-4 is AGI, and it's not.
3. OpenAI is not being open about its AI and is not publicly documenting how its product was trained, which is at odds with its stated original intention as a non-profit org to evaluate and ethically use AI.
4. The Microsoft research paper comes right out of the gate citing a race science paper.

Jaxyon fucked around with this message at 00:49 on May 24, 2023

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

KillHour posted:

Actually, one of the most important requirements for a neural network is that you use a non-linear activation function, specifically because otherwise the entire system is reducible to a single linear equation :v:

I found this out the hard way when I wrote my first neural net like a decade ago, because I did, in fact, just make an overcomplicated linear equation and was very confused as to why it was poo poo.

I do remember one of my former office mates from a while ago who dabbled with AI/ML talking over and over about sigmoid this and sigmoid that.
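For reference, the sigmoid is one classic nonlinear activation; it squashes any real input into (0, 1):

```python
import math

def sigmoid(x):
    # 1 / (1 + e^-x): smooth, bounded, and crucially not linear
    return 1.0 / (1.0 + math.exp(-x))

assert sigmoid(0) == 0.5                  # centered at 0.5
assert 0.0 < sigmoid(-10) < sigmoid(10) < 1.0
```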

KillHour
Oct 28, 2007


Jaxyon posted:

1. People are using the wrong terms in calling this stuff "AI"; like many techbro trends, it's mostly bullshit overhyping.

Companies have been marketing things as "AI" for decades, and it's always been understood that we are not talking about AGI or machine sentience. There's a lot of hype, but current models are genuinely impressive and useful for many, many tasks. Is it overhyped? Yeah, probably. But it's definitely more iPhone than Bitcoin.

Jaxyon posted:

2. People are pretending ChatGPT-4 is AGI, and it's not.

Who? People in this thread? Famous people? Be specific, and also tell me why it matters. I can find someone who says any crackpot thing I want.

Jaxyon posted:

3. OpenAI is not being open about its AI and is not publicly documenting how its product was trained, which is at odds with its stated original intention as a non-profit org to evaluate and ethically use AI.

Is it lovely that the company was founded to be non-profit and then went for-profit? Sure. But :capitalism:.

Jaxyon posted:

4. The Microsoft research paper comes right out of the gate citing a race science paper.

Maybe it would help if you actually named the paper being cited here because there are several dozen references in that paper.

Count Roland
Oct 6, 2013

Can someone speak to what this "race science paper" is? I don't even know what that is, nor is it clear why it's in a computing paper, nor why it's being brought up in YouTube videos.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

Jaxyon posted:

People are using the wrong terms in calling this stuff "AI"; like many techbro trends, it's mostly bullshit overhyping.

What makes these systems less "AI" than the other AI's we've been using for the past several decades, like video game AI?

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Count Roland posted:

Can someone speak to what this "race science paper" is? I don't even know what that is, nor is it clear why it's in a computing paper, nor why it's being brought up in YouTube videos.


https://twitter.com/TimoPG/status/1645567315407409155?s=20

I have no idea who this Twitter guy is but it came up first on search. I was curious about it too.

E: https://www1.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf

Boris Galerkin fucked around with this message at 01:03 on May 24, 2023

Jaxyon
Mar 7, 2016
I’m just saying I would like to see a man beat a woman in a cage. Just to be sure.

KillHour posted:

Companies have been marketing things as "AI" for decades, and it's always been understood that we are not talking about AGI or machine sentience. There's a lot of hype, but current models are genuinely impressive and useful for many, many tasks. Is it overhyped? Yeah, probably. But it's definitely more iPhone than Bitcoin.

The iPhone is specifically cited in the video as not really an innovation, but rather a bunch of decades-old tech packaged in a slick design and marketing.

And most laypeople believe, given the way they're talking about it, that current "AI" is a lot closer to AGI than it is, and that's specifically because of the techbro hype.

quote:

Who? People in this thread? Famous people? Be specific, and also tell me why it matters. I can find someone who says any crackpot thing I want.

As the video says, the "pause" letter specifically seems to be making that argument, recommending we slow down for safety... but instead of safety for consumers or sensible regulation, it argues that we're in danger from an AGI.

quote:

Is it lovely that the company was founded to be non-profit and then went for-profit? Sure. But :capitalism:.

As the video says, it's likely they never really planned to be non-profit, given the amount of money being shoved into the tech. But the point is, people are raising the alarm about the dangers of ChatGPT as an AGI, yet OpenAI is completely secretive about how it trains its model and other things, for "security" reasons that are much more likely money reasons.

quote:

Maybe it would help if you actually named the paper being cited here because there are several dozen references in that paper.

It's mentioned in the video; you can go ahead and watch it. I get summarizing and discussing some of the points, but if I'm going to post a video and have you argue with the points another person made, you can go ahead and listen to those points yourself.

edit: ^^^^^^ It's the Bell Curve one, as the person above me states.

Jaxyon fucked around with this message at 01:10 on May 24, 2023

SubG
Aug 19, 2004

It's a hard world for little things.

Boris Galerkin posted:

https://twitter.com/TimoPG/status/1645567315407409155?s=20

I have no idea who this Twitter guy is but it came up first on search. I was curious about it too.

E: https://www1.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf
The opinion piece has a Wikipedia article.

Citing it to define "intelligence", especially in 2023, is pretty lol, but for whatever it's worth, the most recent version of the paper (put up a little less than a month after the original, according to the timestamps on arXiv) removes the reference.

KillHour
Oct 28, 2007


Count Roland posted:

Can someone speak to this "race science paper" is? I don't even know what that is, nor is it clear why it is in a computing paper, nor why it's being brought up in YouTube videos.

I don't know. I have the paper open and I can't figure out what they are referring to. However, it's important to note that the researcher in that video making accusations about the paper is, herself, cited in that paper.

quote:

9.3 Bias
Models like GPT-4 are trained on data from the public internet, among other data sources, like carefully
curated human instructions used in RL pipelines. These datasets are riddled with various sources of inherent
biases [BGMMS21, BBDIW20, HS16, BB19]. It has been demonstrated by previous research that when used
to generate content, make decisions, or assist users, LLMs may perpetuate or amplify existing biases. We have
demonstrated throughout the paper that GPT-4’s capabilities and behaviors represent a “phase transition”
in capabilities compared to earlier models and observations on earlier models do not necessarily translate.
Therefore, it is important to understand whether and how GPT-4 exhibits biases, and more importantly, how
the emerging capabilities of the model can be used as part of mitigation strategies.

...

[BGMMS21] Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On
the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021
ACM Conference on Fairness, Accountability, and Transparency, pages 610–623, 2021.

The authors here are basically saying "making the model bigger actually seems to fix a lot of these biases that you claim are inherent in larger models."

Now, that's not to say she's necessarily doing anything in bad faith by claiming that the paper has citation issues, but it's more than a little sus.

SubG posted:

The opinion piece has a Wikipedia article.

Citing it to define "intelligence", especially in 2023, is pretty lol, but for whatever it's worth, the most recent version of the paper (put up a little less than a month after the original, according to the timestamps on arXiv) removes the reference.

This is a nothingburger. It's a weird choice for sure, but citations in academia are all over the place and you can't take an out of context citation to mean an endorsement.

KillHour fucked around with this message at 01:13 on May 24, 2023

Jaxyon
Mar 7, 2016
I’m just saying I would like to see a man beat a woman in a cage. Just to be sure.

GlyphGryph posted:

What makes these systems less "AI" than the other AI's we've been using for the past several decades, like video game AI?

Those uses were also misuses of the term, and often people don't realize how video game "AI" works unless they're a programmer.

They're both using AI as a shorthand, but the conflation with AGI has really ramped up with the current hype around ChatGPT and similar.

Nobody thought they were going to lose their jobs to the CoD bots.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

Jaxyon posted:

As the video says
As the video says,
It's mentioned in the video; you can go ahead and watch it.

I for one am not planning on arguing against people who aren't in this thread. I'm definitely not going to argue with people who aren't in this thread, complaining about other vague people who aren't in this thread. And I don't have any clue why you think anyone would?

If you want to make a point, make it and support it, if you don't want to discuss things, don't.

Jaxyon posted:

Those uses were also misuses of the term, and often people don't realize how video game "AI" works unless they're a programmer.

Why? Even if it was, at this point it's been well enough established that it isn't now.

Jaxyon
Mar 7, 2016
I’m just saying I would like to see a man beat a woman in a cage. Just to be sure.

KillHour posted:

I don't know. I have the paper open and I can't figure out what they are referring to. However, it's important to note that the researcher in that video making accusations about the paper is, herself, cited in that paper.

They specifically go over that in the video, that their own paper is being cited and misused.

KillHour
Oct 28, 2007


Jaxyon posted:

Those uses were also misuses of the term, and often people don't realize how video game "AI" works unless they're a programmer.

They're both using AI as a shorthand, but the conflation with AGI has really ramped up with the current hype around ChatGPT and similar.

Nobody thought they were going to lose their jobs to the CoD bots.

That's funny that you say that because the dude who coined the term had this to say.

quote:

In 1979 McCarthy wrote an article entitled "Ascribing Mental Qualities to Machines". In it he wrote, "Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem-solving performance."

Jaxyon posted:

They specifically go over that in the video, that their own paper is being cited and misused.

Oh man, you don't say. I'm shocked that they feel that way. Next you're going to tell me that their mention of that other citation is in the context of supporting their claim that everyone should just ignore that paper and trust their paper instead.

Jaxyon
Mar 7, 2016
I’m just saying I would like to see a man beat a woman in a cage. Just to be sure.

GlyphGryph posted:

I for one am not planning on arguing against people who aren't in this thread, complaining about other vague people who aren't in this thread. And I don't have any clue why you think anyone would?

If you want to make a point, make it and support it, if you don't want to discuss things, don't.

If you're arguing against research, you should probably read the paper, and if you're against some academics holding a position, you should probably watch the video where they present that position instead of my summary. My summary exists in order for you to determine whether or not you want to spend time on the video.

If I wanted to just post their thoughts as my own in the thread, I could do that, but it wouldn't be honest.

If you don't want to engage with me on this, feel free to stop. I'll engage with whatever issues people raise, as best I'm able, to a point.

Jaxyon
Mar 7, 2016
I’m just saying I would like to see a man beat a woman in a cage. Just to be sure.

KillHour posted:

Oh man, you don't say. I'm shocked that they feel that way. Next you're going to tell me that their mention of that other citation is in the context of supporting their claim that everyone should just ignore that paper and trust their paper instead.

LOL did AI save your puppy or something?

You're taking this awful personally.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
You both are taking it super personally, tbh.

reignonyourparade
Nov 15, 2012
Killhour is taking it mildly personally and Jaxyon is taking it super personally, actually

Jaxyon
Mar 7, 2016
I’m just saying I would like to see a man beat a woman in a cage. Just to be sure.
Maybe, but I take everything personally. :colbert:


KillHour
Oct 28, 2007


Jaxyon posted:

If you're arguing against research, you should probably read the paper, and if you're against some academics holding a position, you should probably watch the video where they present that position instead of my summary. My summary exists in order for you to determine whether or not you want to spend time on the video.

If I wanted to just post their thoughts as my own in the thread, I could do that, but it wouldn't be honest.

If you don't want to engage with me on this, feel free to stop. I'll engage with whatever issues people raise, as best I'm able, to a point.

I don't want to argue against their position. In fact, I don't think they should be listened to any more than Elon Musk, because they're basically shilling at this point. That's the entirety of what I'm saying. I'm not being pro-AI. I'm being anti-shill. I said exactly the same thing about the pro-AI video earlier in the thread that was stanning right wing nutjobs.

reignonyourparade posted:

Killhour is taking it mildly personally and Jaxyon is taking it super personally, actually

I'm taking it mildly personally because I was run off by Cinci in the last thread for cold linking a Robert Miles video, and I wasn't being anywhere near as annoying about it. :colbert:

KillHour fucked around with this message at 01:27 on May 24, 2023
