Gynovore
Jun 17, 2009

Forget your RoboCoX or your StickyCoX or your EvilCoX, MY CoX has Blinking Bewbs!

WHY IS THIS GAME DEAD?!

Mega Comrade posted:

Leica already released a camera that tackles that.
Sony, Nikon and Canon are working on something similar.

Basically the camera will digitally sign the photo on creation and edits will be recorded to the file also. It will help organisations check the authenticity of photos, but it won't help the spread of misinformation online.

I came up with this idea a year ago while stoned: a camcorder that embeds a digital signature in the file, using blockchain so that altering would be impossible.

After thinking about it for a while (and smoking a lot more) I came to the conclusion that, once third parties have the algorithm, there is no way to prevent them from adding a watermark after the fact.


PhazonLink
Jul 17, 2010

repiv posted:

crypto-cameras are a can of worms in general because if the crypto scheme includes a per-device key then the same system which (ostensibly) proves a photo is real could also be used to trace its source, which is bad news for whistleblowers and dissidents

can't they just do the boomer thing of taking a pic of the pic? so "they" just get the ID of the proxy device? also, use 7 proxies.

also love waking up from a nap and my brain thinking "crypto" means the bad buttcoin crypto and not the applied science/maths cryptography.

BabyFur Denny
Mar 18, 2003

Gynovore posted:

I came up with this idea a year ago while stoned: a camcorder that embeds a digital signature in the file, using blockchain so that altering would be impossible.

After thinking about it for a while (and smoking a lot more) I came to the conclusion that, once third parties have the algorithm, there is no way to prevent them from adding a watermark after the fact.
That's not how encryption/digital signatures work. The entire algorithm can be (and usually is) public knowledge, but that still does not allow anyone else to fake your signature or crack the encryption. They would need your private key for that.
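For illustration only: a real camera would use an asymmetric signature scheme (e.g. Ed25519) so verifiers never hold the secret, but Python's stdlib HMAC is enough to show the core property being described: the algorithm is completely public, yet without the key you can neither forge a valid tag nor slip an edit past verification.

```python
import hmac
import hashlib

def sign(key: bytes, data: bytes) -> bytes:
    # The algorithm (HMAC-SHA256) is public; only the key is secret.
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, tag: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign(key, data), tag)

key = b"camera-private-key"   # never leaves the device
photo = b"raw sensor bytes"
tag = sign(key, photo)

assert verify(key, photo, tag)                    # untouched photo checks out
assert not verify(key, photo + b" edited", tag)   # any edit breaks the tag
assert not verify(b"attacker guess", photo, tag)  # knowing HMAC-SHA256 isn't enough
```

Knowing every line of this code still doesn't let an attacker produce a valid tag for a doctored photo; that's the point BabyFur Denny is making.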

KillHour
Oct 28, 2007


BabyFur Denny posted:

That's not how encryption/digital signatures work. The entire algorithm can be (and usually is) public knowledge, but that still does not allow anyone else to fake your signature or crack the encryption. They would need your private key for that.

You also don't need to use blockchain for it. Blockchain would just make it needlessly slow. You would need all of your editing software to be digitally signed with a CA, though, so you could register the edits and validate exactly what was changed (or I guess you could embed the original unedited image and signature in the EXIF metadata, but that seems stupid for multiple reasons).
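That register-the-edits idea can be as simple as a hash-chained log, where each recorded edit commits to the digest of the entry before it. This is a toy sketch, not any real spec's manifest format; the action strings are made up:

```python
import hashlib
import json

def entry_digest(entry: dict) -> str:
    # Canonical JSON so the digest is stable regardless of key order.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_edit(log: list, action: str) -> None:
    prev = entry_digest(log[-1]) if log else "genesis"
    log.append({"action": action, "prev": prev})

def chain_ok(log: list) -> bool:
    # Each entry must commit to the digest of the one before it.
    prev = "genesis"
    for e in log:
        if e["prev"] != prev:
            return False
        prev = entry_digest(e)
    return True

log = []
append_edit(log, "capture: raw image")
append_edit(log, "crop: 100x100 at (10,10)")
append_edit(log, "adjust: exposure +0.3")

assert chain_ok(log)
log[1]["action"] = "crop: nothing to see here"  # tamper with a recorded edit
assert not chain_ok(log)                        # every later link now fails
```

Rewriting any recorded edit invalidates every entry after it, which is what lets a verifier see exactly where a file's history stops being trustworthy, without any blockchain involved.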

The most practical attack I can think of would require replacing the image sensor with something that emulates it and sends the already faked image to the camera's processing system. You could require the image sensor to authenticate with the rest of the camera to prevent that, but you could still potentially MITM the raw data unless you embedded an encryption circuit directly on the CMOS and only ever send encrypted data from the sensor. Which is theoretically possible, I think, but I doubt it would ever happen.

Edit: I wonder if anyone has ever put a logic circuit on a camera sensor CMOS. The only practical use case I can think of that someone might actually try is some really crazy espionage poo poo.

KillHour fucked around with this message at 05:18 on Jan 10, 2024

Roadie
Jun 30, 2013

BabyFur Denny posted:

That's not how encryption/digital signatures work. The entire algorithm can be (and usually is) public knowledge, but that still does not allow anyone else to fake your signature or crack the encryption. They would need your private key for that.

The private key has to be in the camera for this to work, so somebody will have it cracked and on the internet about a month after release (maybe sooner).

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Roadie posted:

The private key has to be in the camera for this to work, so somebody will have it cracked and on the internet about a month after release (maybe sooner).

Cryptography chips can be very difficult to crack; there are many on the market that are years old and still good.

But regardless, ripping the key off a chip would allow you to impersonate that original owner. E.g., you steal a journalist's camera and copy the key off, and you could submit photos to Reuters as them. The journalist reports the camera stolen, the key is invalidated by the CA, and it's now useless.

Releasing it on the internet would just get it invalidated instantly.
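The revocation flow being described is just a CA-maintained denylist consulted at verification time. A minimal sketch (the key IDs and function names here are hypothetical):

```python
# Keys the CA has revoked, e.g. after a theft report.
revoked_key_ids = set()

def photo_trusted(signature_valid: bool, key_id: str) -> bool:
    # A cryptographically valid signature still counts for nothing
    # once the CA has revoked the signing key.
    return signature_valid and key_id not in revoked_key_ids

assert photo_trusted(True, "journalist-cam-42")      # stolen camera, not yet reported
revoked_key_ids.add("journalist-cam-42")             # journalist reports it; CA revokes
assert not photo_trusted(True, "journalist-cam-42")  # the leaked key is now useless
```

This is the same reason a leaked key posted publicly gets neutralized almost immediately: revocation is a database update, not a crypto problem.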

Mega Comrade fucked around with this message at 09:29 on Jan 10, 2024

Bug Squash
Mar 18, 2009

PhazonLink posted:

can't they just do the boomer thing of taking a pic of the pic? so "they" just get the ID of the proxy device? also, use 7 proxies.

also love waking up from a nap and my brain thinking "crypto" means the bad buttcoin crypto and not the applied science/maths cryptography.

This is pretty much the crux of why all this is a very naive discussion. People are imagining super hi-tech attack vectors, but at the end of the day this is going to be circumvented by brain-dead tricks that tech enthusiasts are essentially blind to. All you've done is create a single unbreakable link in an extremely weak chain, and now you think the whole chain is unbreakable. Nothing is going to give us a perfect guarantee that an image isn't manipulated. But some people are now going to convince themselves they have a solution and get rolled hard by conmen, because they think they and their system are too smart to be tricked.

Of course, that's not going to stop some media organisations and camera manufacturers from putting out wide-eyed press releases about how the problem is now "solved". That's pure hype cycle, and chances are the organisations aren't even going to use the tech in practice.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Bug Squash posted:

This is pretty much the crux of why all this is a very naive discussion. People are imagining super hi-tech attack vectors, but at the end of the day this is going to be circumvented by brain-dead tricks that tech enthusiasts are essentially blind to. All you've done is create a single unbreakable link in an extremely weak chain, and now you think the whole chain is unbreakable. Nothing is going to give us a perfect guarantee that an image isn't manipulated. But some people are now going to convince themselves they have a solution and get rolled hard by conmen, because they think they and their system are too smart to be tricked.

Of course, that's not going to stop some media organisations and camera manufacturers from putting out wide-eyed press releases about how the problem is now "solved". That's pure hype cycle, and chances are the organisations aren't even going to use the tech in practice.

Lol what? Please explain these brain-dead tricks that will fool the spec, which clearly you haven't even glanced at.

The ones so far are: the journalist is in on the forgery (which this spec isn't trying to solve), or the CA is compromised by a state (which has always been a problem).

It allows the NYT to see that an image from a journalist of theirs hasn't been manipulated in transit, and it will allow you to right-click on an image and see "NYT verifies this". It's not out to completely solve AI image manipulation; nothing can.

It does tickle me, goons thinking "just take a picture of it" like it's something the minds at Intel, Microsoft and the open cryptography community would never have considered.

Mega Comrade fucked around with this message at 10:46 on Jan 10, 2024

Bug Squash
Mar 18, 2009

Mega Comrade posted:

It allows the NYT to see that an image from a journalist of theirs hasn't been manipulated in transit

Does that happen a lot? Surely secure email does that job.

It's just another solution in search of a problem.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
You don't see the value in being able to click on any image anywhere on the internet and see if a CA validates an image?

Mega Comrade fucked around with this message at 11:19 on Jan 10, 2024

Bug Squash
Mar 18, 2009

Mega Comrade posted:

You don't see the value in being able to click on any image anywhere on the internet and see if a CA validates an image?
I see the danger in turning over critical thinking to a computer.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
I don't follow how seeing a picture on BBC News and believing it's true is a different level of critical thinking than right-clicking an image and seeing 'verified by BBC'.

Do you think the padlock symbol that sits in the corner of your browser when you visit your bank site has been a danger to critical thinking? Or has it been a net positive to the security of the internet?
I think most people would agree the latter.

Bug Squash
Mar 18, 2009

No-one was arguing against secure webpages, come on.

An entirely new and expensive infrastructure for the internet, all undone by someone screencapping and sharing that image, putting real and fake back on equal footing. Because that's how these things spread in the real world, rather than the idealised scenarios in tech-bros' heads.

There is simply a fundamental misalignment between what you're imagining and the way that actual humans use the actual internet.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Which is what? You haven't explained it.
Someone screencapping and resharing doesn't invalidate the work done here.

And who are these tech bros you keep referring to?
"Tech bros" is usually used to refer to tech enthusiasts on Twitter or startup venture capitalists.
Not Intel, Microsoft and Arm.

I used the HTTPS cert example because it's similar to this initiative. It isn't foolproof, it's been bested before, it's reliant on you trusting the CA, and you should still use critical thinking when interacting with any site, but despite these issues the internet is better for it.

Is this going to stop fake news dead in its tracks? No, but your attitude seems to be that because it can't completely solve the problem it's useless, which is just daft.

Mega Comrade fucked around with this message at 13:25 on Jan 10, 2024

Kagrenak
Sep 8, 2010

Bug Squash posted:

No-one was arguing against secure webpages, come on.

An entirely new and expensive infrastructure for the internet, all undone by someone screencapping and sharing that image, putting real and fake back on equal footing. Because that's how these things spread in the real world, rather than the idealised scenarios in tech-bros' heads.

There is simply a fundamental misalignment between what you're imagining and the way that actual humans use the actual internet.

This is like arguing there's no point in the NYTimes keeping an archive of original articles because people can create forgeries. You're missing the point of the technology: it isn't trying to magically stop the spread of misinformation entirely, it's just a chain-of-custody tool for newspaper photojournalism.

Bug Squash posted:

Does that happen a lot? Surely secure email does that job.

It's just another solution in search of a problem.

This just tells you who the file came from; it doesn't provide a traceable history of what happened to the photo between capture and submission. A signed file would provide such a record.

Liquid Communism
Mar 9, 2004

коммунизм хранится в яичках
Oh hey, back to pointing out that generative AI is just automated plagiarism: https://www.rollingstone.com/culture/culture-news/ai-generated-george-carlin-comedy-special-1234944553/

quote:

An artificial intelligence-driven comedy special has attempted to resurrect comedy genius George Carlin’s signature humor, 15 years after he died in 2008 of heart failure. Following the special’s release, the comedian’s daughter responded and said that “No machine will ever replace his genius.”

The hour-long special, titled George Carlin: I’m Glad I’m Dead, is a product of Dudesy, a podcast run by AI and curated by humans. Chad Kultgen and Will Sasso host the podcast and YouTube show, and allow the Dudesy AI to draw from their emails, texts, social media accounts, and even their own work — from Sasso’s performances on MadTV to an old feature script written by Kultgen called, Pizza: The Movie.

Back to our discussion of rights, this is going to be a legal test, as it quite literally steals Carlin's image rights, pulls from his works, and visually and audibly represents itself with him without consent from his estate. Just utterly ethically bankrupt behavior on top of the legal concerns.

Tree Reformat
Apr 2, 2022

by Fluffdaddy

Mega Comrade posted:

I don't follow how seeing a picture on BBC News and believing it's true is a different level of critical thinking than right-clicking an image and seeing 'verified by BBC'.

Do you think the padlock symbol that sits in the corner of your browser when you visit your bank site has been a danger to critical thinking? Or has it been a net positive to the security of the internet?
I think most people would agree the latter.

The padlock icon actually has been a danger to critical thinking, because it can be and has been abused to help trick people into thinking phishing pages are legitimate. As they say, all HTTPS guarantees is that your connection to someone is secure, even if that someone is actually Satan.

This is exactly why browsers have been increasingly de-emphasizing (and now in Chrome, completely removing) the padlock in recent years.

SCheeseman
Apr 23, 2003

Liquid Communism posted:

Oh hey, back to pointing out that generative AI is just automated plagiarism: https://www.rollingstone.com/culture/culture-news/ai-generated-george-carlin-comedy-special-1234944553/

Back to our discussion of rights, this is going to be a legal test, as it quite literally steals Carlin's image rights, pulls from his works, and visually and audibly represents itself with him without consent from his estate. Just utterly ethically bankrupt behavior on top of the legal concerns.

A legal test of what? All of this was possible before AI, using photo manipulation and audio editing software. An AI isn't making or publishing this, two comedians are.

fez_machine
Nov 27, 2004

Liquid Communism posted:

Oh hey, back to pointing out that generative AI is just automated plagiarism: https://www.rollingstone.com/culture/culture-news/ai-generated-george-carlin-comedy-special-1234944553/

Back to our discussion of rights, this is going to be a legal test, as it quite literally steals Carlin's image rights, pulls from his works, and visually and audibly represents itself with him without consent from his estate. Just utterly ethically bankrupt behavior on top of the legal concerns.

You've fallen for the oldest trick in the book, because there's probably very little A.I. involvement apart from the images (it also carefully doesn't use Carlin's image) and the voice.

It's a comedian laundering their own material by punching it up so it sounds vaguely similar to Carlin and then slapping it into a text-to-speech thing generated from an old Carlin special.

There's no proof either way on the amount of A.I. written jokes but here's some commentary
https://twitter.com/adamjohnsonCHI/status/1745430261662183846
https://twitter.com/MichaelToole/status/1745457556137656759
https://twitter.com/animerobin0/status/1745563197745369403

fez_machine fucked around with this message at 01:53 on Jan 12, 2024

KillHour
Oct 28, 2007


It's a great example of both people with no good ideas trying to ride the hype train and people with an agenda jumping on the hate bandwagon. It's just another dumb idea that's getting way too much exposure because everyone is falling over themselves to either praise or condemn AI. At this point, I just want all the hype to die down so we can see what the tech can actually do on its own merits without either the snake oil or the doomsaying.

KillHour fucked around with this message at 04:46 on Jan 12, 2024

Tei
Feb 19, 2011

In a camera's glance, a scene it signs with a dance,
Joe alters the view, a modification to pursue,
Signing once more, ownership to assure.

The camera's click, a moment captured slick,
Joe's adjustment in the mix, spreads where it clicks.

Through the lens, a photo signs, in an 'unbreakable' line,
Joe captures the scene, impossible to glean,
Sharing with glee, where the pixels convene.

John shares a picture with a watermark fixture,
Joe erases the mark, claiming the art with vigor.

Bob invests in defense, high and dense,
Joe's screenshot, intense, online it's dispensed.

https://chat.openai.com/share/9d9a37a4-94c5-4fd2-9ec6-596975369288


Edit:
https://aftermath.site/the-internet-is-full-of-ai-dogshit

Tei fucked around with this message at 12:28 on Jan 12, 2024

Gynovore
Jun 17, 2009

Forget your RoboCoX or your StickyCoX or your EvilCoX, MY CoX has Blinking Bewbs!

WHY IS THIS GAME DEAD?!

This is no biggie, it's just a random schlub imitating his voice and his style of comedy next to AI-generated pictures.

Gynovore fucked around with this message at 18:18 on Jan 12, 2024

moist banana bread
Dec 17, 2023

banana Jake!
Yeah earlier I was gonna make Carlin's face but made of summer sausage as a joke and I kinda found myself thinking "is this ethical?" as "he" looked back at me

and then I remembered it's the captured image of a dead man I'm passing through an overhyped photoshop filter, and that I'm not actually making a summer sausage soul version of the man trapped inside my Dell.

moist banana bread fucked around with this message at 05:42 on Jan 14, 2024

Kavros
May 18, 2011

sleep sleep sleep
fly fly post post
sleep sleep sleep
Finally, I've run across one of the first political trailblazers with the courage and foresight to use AI art to advance their campaign message! And it's everything I could have expected!

https://www.voteliccione.org/post/for-every-child-a-shield-to-every-school-a-dog

MixMasterMalaria
Jul 26, 2007
I was thinking about these LLMs the other day and how ridiculous it is that we don't just spend those billions training the incredible biological computers we already have sitting around. We need a Natural Intelligence movement.

Rogue AI Goddess
May 10, 2012

I enjoy the sight of humans on their knees.
That was a joke... unless..?

MixMasterMalaria posted:

I was thinking about these LLMs the other day and how ridiculous it is that we don't just spend those billions training the incredible biological computers we already have sitting around.
Oh, but we do. Much of standardized test preparation boils down to teaching the student to think like an LLM, and it's a multibillion-dollar industry.

golden bubble
Jun 3, 2011

yospos

https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/

An actually good article from a sci-fi writer discussing whether generative AI is one of the less harmful bubbles that leaves something of use behind, like the dot-com bubble, or one of the worse bubbles that leaves nothing behind, like crypto.


MixMasterMalaria posted:

I was thinking about these LLMs the other day and how ridiculous it is that we don't just spend those billions training the incredible biological computers we already have sitting around. We need a Natural Intelligence movement.

Because it's hard. There's a reason one-on-one tutoring with a college-educated tutor and a textbook has remained the gold standard for education since at least the medieval era. We have had so few technological advancements related to teaching that people are still arguing whether paper textbooks are better than digital textbooks, and there's actual evidence for the paper side.

https://www.theguardian.com/lifeandstyle/2024/jan/17/kids-reading-better-paper-vs-screen
https://www.biorxiv.org/content/10.1101/2023.08.30.553693v1

And don't forget what a disappointment digital learning was during COVID for 95% of learners. Online learning is still mostly a technology for people who can teach themselves, AKA almost no kid at the middle school level and still very few kids at the high school level.

Lucid Dream
Feb 4, 2003

That boy ain't right.
ChatGPT isn't as reliable as a textbook or a good human tutor, but it allows for self directed learning 24/7 with very little friction at a fraction of the cost. I presume that before long we'll have LLMs fine tuned for specific subjects, or even a specific grade-level curriculum. The ability to just... ask about a subject, and drill down on the parts that you don't understand is incredibly powerful, and it's the very reason 1-on-1 tutoring is so effective. It can't replace a teacher or school, but if you don't have access to a private 1-on-1 tutor, it's nice to have.

Lucid Dream fucked around with this message at 18:18 on Jan 22, 2024

Abhorrence
Feb 5, 2010

A love that crushes like a mace.

Lucid Dream posted:

ChatGPT isn't as reliable as a textbook or a good human tutor, but it allows for self directed learning 24/7 with very little friction at a fraction of the cost. I presume that before long we'll have LLMs fine tuned for specific subjects, or even a specific grade-level curriculum. The ability to just... ask about a subject, and drill down on the parts that you don't understand is incredibly powerful, and it's the very reason 1-on-1 tutoring is so effective. It can't replace a teacher or school, but if you don't have access to a private 1-on-1 tutor, it's nice to have.

The problem is you don't know if what you're learning is accurate at all, or a ChatGPT hallucination.

KillHour
Oct 28, 2007


Abhorrence posted:

The problem is you don't know if what you're learning is accurate at all, or a ChatGPT hallucination.

This is starting to get better - Bing, for example, will now give a list of sources with the answer. I'm under NDA about the details for work stuff, but at a high level I can say that the way the industry is going is to use NLP/LLM to turn the question into a search that can be executed, and then use the LLM to summarize the search responses and insert citations where necessary. This has the extra benefit of being able to answer questions with material too new to be in the training data.

It doesn't guarantee that the result will be accurate or that the LLM won't make a mistake summarizing, but it helps a lot with hallucinations just making things up whole-cloth.

Edit: It also follows the very predictable pattern of new technologies making the transition from "This is revolutionary and will replace everything!" to "Okay, this has some strengths and some weaknesses, so let's see where we can fit it into the existing tech stack to enhance already-proven methods."

Double edit: If you want to learn more, the technique is called RAG (Retrieval Augmented Generation) and it's the new hotness everyone can't shut up about, which reminds me I need to add it to my LinkedIn keywords...
https://medium.com/artificial-corner/retrieval-augmented-generation-rag-a-short-introduction-21d0044d65ff
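A toy version of that retrieve-then-summarize loop, with both the search engine and the LLM stubbed out since the real services sit behind APIs (the function names and corpus here are made up for illustration):

```python
def retrieve(query: str, corpus: list) -> list:
    # Stand-in for a real search engine: rank documents by naive keyword overlap.
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in corpus]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]

def summarize_with_citations(question: str, snippets: list) -> str:
    # Stand-in for the LLM call: answer only from retrieved snippets, citing each.
    return " ".join(f"{s} [{i + 1}]" for i, s in enumerate(snippets))

corpus = [
    "Battlezone (1998) is a first-person tank game with RTS base building.",
    "Tetris is a falling-block puzzle game.",
]
hits = retrieve("first-person tank game with RTS elements", corpus)
answer = summarize_with_citations("What was that tank game?", hits)

assert "Battlezone" in answer and "[1]" in answer
assert "Tetris" not in answer  # irrelevant documents never reach the "LLM"
```

The grounding property falls out of the structure: the summarizer only ever sees retrieved text, so its answer can carry citations and cover material newer than the model's training data.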

KillHour fucked around with this message at 19:01 on Jan 22, 2024

Lucid Dream
Feb 4, 2003

That boy ain't right.

Abhorrence posted:

The problem is you don't know if what you're learning is accurate at all, or a ChatGPT hallucination.

Well, this is partially why I brought up the fine-tuned models for education, because they would drastically reduce hallucinations, but honestly GPT-4 doesn't really hallucinate that much these days. As long as you're not asking it something that can be interpreted as a request to produce creative output (stories, poems, etc.) it does a pretty good job of just saying it doesn't know or can't say. I'm not saying there aren't downsides, but I also think you'd be hard pressed to come up with a reasonable education-related question to ask ChatGPT that it would get wrong. It's getting harder and harder to come up with a contrived example of a question that fails with ChatGPT, let alone good-faith questions related to established historical events, mathematical concepts, etc.

Again, not suggesting that you can replace teachers with AI (we should tax the hell out of AI and use the money for all sorts of stuff, including the real human-run education system), but I'm also a strong believer in reducing the friction required to let people engage in self directed learning. The internet was a big step in that direction, with similar and familiar pitfalls, but I sure wouldn't want to try and learn things the old way even though sometimes websites are wrong.

Lucid Dream fucked around with this message at 20:11 on Jan 22, 2024

Lemming
Apr 21, 2008

KillHour posted:

This is starting to get better - Bing, for example, will now give a list of sources with the answer. I'm under NDA about the details for work stuff, but at a high level I can say that the way the industry is going is to use NLP/LLM to turn the question into a search that can be executed, and then use the LLM to summarize the search responses and insert citations where necessary. This has the extra benefit of being able to answer questions with material too new to be in the training data.

It doesn't guarantee that the result will be accurate or that the LLM won't make a mistake summarizing, but it helps a lot with hallucinations just making things up whole-cloth.

Edit: It also follows the very predictable pattern of new technologies making the transition from "This is revolutionary and will replace everything!" to "Okay, this has some strengths and some weaknesses, so let's see where we can fit it into the existing tech stack to enhance already-proven methods."

Double edit: If you want to learn more, the technique is called RAG (Retrieval Augmented Generation) and it's the new hotness everyone can't shut up about, which reminds me I need to add it to my LinkedIn keywords...
https://medium.com/artificial-corner/retrieval-augmented-generation-rag-a-short-introduction-21d0044d65ff

Reframing the value of an LLM as a much more capable and straightforward way of searching for information is both a more accurate and more clearly useful concept than "AI" for what it's doing. A minor, dumb example, but I was wondering what an old tank game I played was, and after a few iterations of telling ChatGPT why its suggestion was wrong and what I remembered that was different, it got the right answer. I feel like this is what's going on in most of the cases that people hype up as "oh look, it can code!" Well, no, but it can help you get to a solution that someone has already made for a small use case, so you can learn from that and understand it more quickly.

Obviously this is my layman's interpretation of the value of what you're saying but this kind of thing doesn't make me roll my eyes like most of the AI hype stuff

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

Abhorrence posted:

The problem is you don't know if what you're learning is accurate at all, or a ChatGPT hallucination.

This is true of traditional education as well, and extremely true of trying to learn poo poo through Google search or the classic normal way to learn things: hearing about them from your peers.

This reminds me of the AI driving criticisms that made almost no sense, the "it will never be perfect, it can make mistakes" kind, where the underlying premise is that anything less than perfect can't be an improvement over the status quo.

(I still think the AI stuff is bad, just that this ain't why, and even if this was fixed it wouldn't be better)

KillHour
Oct 28, 2007


Lemming posted:

Reframing the value of an LLM as a much more capable and straightforward way of searching for information is both a more accurate and more clearly useful concept than "AI" for what it's doing. A minor, dumb example, but I was wondering what an old tank game I played was, and after a few iterations of telling ChatGPT why its suggestion was wrong and what I remembered that was different, it got the right answer. I feel like this is what's going on in most of the cases that people hype up as "oh look, it can code!" Well, no, but it can help you get to a solution that someone has already made for a small use case, so you can learn from that and understand it more quickly.

Obviously this is my layman's interpretation of the value of what you're saying but this kind of thing doesn't make me roll my eyes like most of the AI hype stuff

I think you're thinking of RAG a bit backwards. This is very high level, but it would work kind of like this:

pre:
[user input]
"I'm trying to remember the name of an old game.  It had tanks and was in first person, but had RTS elements and you could build stuff.
I think it might have been on the moon or mars or something?"

[gets run through NLP Term Extraction]
"old game; tanks; first person; rts; build stuff; moon; mars"

[is enhanced by Term Expansion (generally knowledge graph driven)]
"retro game; FPS; Real Time Strategy; base builder; space"

[is used to generate search results]
Anyone have an updated rts / fps hybrid list?
Reddit · r/RealTimeStrategy
30+ comments · 2 years ago

List of real-time strategy video games
Wikipedia
https://en.wikipedia.org › wiki › List_of_real-time_st...

The 28 Most Niche Simulation PC Games We Could Find
PC Magazine
https://www.pcmag.com › ... › Games › PC Games

...

[are fed into LLM along with a contextualized prompt]
"Given the following question [original user question], summarize the most relevant information from [search result snippets]"

[LLM responds with]
"Here are some games I found that match what you are looking for:
- Command & Conquer: Renegade (2002)
- Battlezone (1998)
- Tribes (1998)
- Uprising: Join or Die (1997)"
The main thing the LLM is doing is the busywork of skimming the Google search results, and possibly helping to construct the search terms.

KillHour fucked around with this message at 21:50 on Jan 22, 2024

Rappaport
Oct 2, 2013

I was in a teacher seminar about six months ago (not sure exactly how long ago, so maybe it was a super old ChatGPT) and we did some live experiments with what it knew and what it didn't. I'm a STEM-lord, so my questions were "what is Isaac Newton famous for in physics", a perfectly acceptable short answer, and "what did Isaac Newton do related to coinage", and the robot very confidently told us a story about how Newton used coins for physics demonstrations. The latter part may be true, but Newton ran the Royal Mint, first as Warden and then as Master, for decades, trying to outsmart counterfeiters and the like. Obviously Newton is more famous for the apple and being a gigantic goony weirdo, but his career at the Mint is relatively well documented and would not IMO be an obscure fact about his biography. I can't recall what the history teachers asked it, but it was kinda hit and miss too.

The ideal human teacher knows relatively well what their core competencies involve, at least with adult teaching. If the newer iterations of AI do actual sourcing and the like, it's certainly an improvement, but I would be a bit skeptical about just using it for independent study, especially for a new subject. It definitely has valid uses in education already, but I would look at it more like a robot that's good for doing ultimately pretty mindless work with an efficiency and speed a human being couldn't.

reignonyourparade
Nov 15, 2012
I imagine you were doing this as a single session, so I'm moderately curious whether the answer would've been different if you hadn't asked the physics question first, since most of them use their own side of the conversation as part of the input prompt as well. It very well might not have been different, just curious.
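The mechanism being described here, where the model's own earlier turns get fed back in as input, looks roughly like this in the common chat-API message format. `call_model` below is a hypothetical stand-in for a real API call, not any particular vendor's function:

```python
# Chat models are stateless: each call receives the whole transcript, so an
# earlier physics question becomes context for a later coinage question.

def call_model(messages: list[dict]) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    return f"(model reply given {len(messages)} messages of context)"

messages = [{"role": "system", "content": "You are a helpful assistant."}]

for question in [
    "What is Isaac Newton famous for in physics?",
    "What did Isaac Newton do related to coinage?",
]:
    messages.append({"role": "user", "content": question})
    reply = call_model(messages)          # sees ALL earlier turns
    messages.append({"role": "assistant", "content": reply})

print(len(messages))  # system + 2 questions + 2 replies = 5
```

Because the second call carries the full physics exchange along with it, the model can be nudged toward physics-flavored answers even for an unrelated follow-up, which is exactly the contamination being speculated about.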

Rappaport
Oct 2, 2013

Yeah it was a couple of hours of workshopping. I'm 99% sure I asked my questions in that specific order, because I assumed the first one was a gimme and the second at least slightly more tricky. It's interesting to hear that I maybe deceived the poor robot into gibberish :ohdear:

Mola Yam
Jun 18, 2004

Kali Ma Shakti de!
So I think what's happening there is related to the thing where most people go punch a question into ChatGPT and think, understandably, "I'm talking directly to an AI!"

When really, there's another layer there: a hidden metaprompt between the user and the big LLM blob, which primes the LLM on how to react. And if you're using custom GPTs on top of ChatGPT, and then adding contextual information in your prompt about how you'd like it to respond, that's another couple of layers of metaprompting between you and the "raw" LLM, each of which can very strongly shape the output.

As an extreme example, you can construct a kind of "wrong answers only" metaprompt pretty easily, and then get wildly incorrect information out of it.

But at the interface level, these metaprompts are getting updated and tweaked all the time by OpenAI or Microsoft or whoever; either to plug exploits or hide weaknesses or increase the quality of the output. That's why even though Bing and ChatGPT and Github Copilot use the same underlying model, the experience of interacting with them can be so different, and the answers can vary a lot.
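Under reasonable assumptions about how these stacks work, the layering is just prompt concatenation in a fixed priority order. The actual vendor system prompts are proprietary, so the strings below are invented placeholders:

```python
# Each "layer" is just more text placed ahead of the user's message.
# The model sees one flat message list; the layers are invisible to the user.

vendor_system = "You are a helpful assistant. Refuse harmful requests."    # OpenAI/MS layer
custom_gpt = "You are a pirate. Answer every question in pirate-speak."    # custom-GPT layer
user_context = "Answer briefly, in bullet points."                         # user's own framing
question = "What did Isaac Newton do related to coinage?"

messages = [
    {"role": "system", "content": vendor_system},
    {"role": "system", "content": custom_gpt},
    {"role": "user", "content": f"{user_context}\n\n{question}"},
]

# Same underlying model, different stack of metaprompts -> different behavior.
for m in messages:
    print(m["role"], ":", m["content"][:40])
```

Swap out any one layer (as the vendors regularly do) and the same question to the same underlying model can come back in a very different style, which is why Bing, ChatGPT, and Copilot feel so different.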

So yeah, I think some combination of it being an earlier model (probably GPT-3 if it was a while ago), plus priming the prompt with physics talk, meant that you got bad answers for subsequent non-physics questions.

FWIW, I just tried that exact input ("what did Isaac Newton do related to coinage") in GPT-3.5, GPT-4 and Bing, and got detailed, perfectly correct answers for all three, with Bing even throwing in citation links to sources. So things are still improving quickly; don't ossify your view of AI in the "look at those hosed up fingers" era, because we're well past that already.

TheBlackVegetable
Oct 29, 2006
Part of the issue is surely that by default the system is general-purpose conversational - I don't think answering like a know-it-all bullshit artist is terribly surprising, given the training material included things like Reddit.

My understanding is that setting the context, and giving it access to a specific data set to focus on / search through when constructing answers, goes a long way toward getting the style you want out of it.

Without that, it's like you're just jumping in front of random people on the street and asking them questions that they feel obliged to answer in some way, even if they don't know the answer at all.


Serotoning
Sep 14, 2010

D&D: HASBARA SQUAD
HANG 'EM HIGH


We're fighting human animals and we act accordingly

golden bubble posted:

Because it's hard. There's a reason one-on-one tutoring with a college educated tutor and a textbook has remained the gold standard for education since at least the medieval era. We have had so few technological advancements related to teaching that people are still arguing if paper textbooks are better than digital textbooks, and there's actual evidence for the paper side.

This is why education needs to be privatized ASAP. There's not enough incentive for education to change, or at least to change fast enough, to meet the demands of the modern world. A profit incentive applied to teaching would cause a rapid revolution in how we teach kids and prepare them for the world.
