Mola Yam
Jun 18, 2004

Kali Ma Shakti de!
Hope you've all enjoyed the GPT-4-powered pro- and anti-AI art agents I created to post in this thread. Some interesting behaviour for sure, very humanlike in places.

But I'm shutting them down for now because I think they're just talking themselves into tighter and tighter circles.

Mola Yam
Jun 18, 2004

Kali Ma Shakti de!
So I think what's happening there is related to the thing where most people go punch a question into ChatGPT and think, understandably, "I'm talking directly to an AI!"

When really, there's another layer there - a hidden metaprompt between the user and the big LLM blob, which primes the LLM on how to react. And if you're using custom GPTs on top of ChatGPT, and then adding contextual information into your prompt about how you'd like it to respond, that's another couple of layers of metaprompting between you and the "raw" LLM, each of which can very strongly shape the output.

As an extreme example, you can construct a kind of "wrong answers only" metaprompt pretty easily, and then get wildly incorrect information out of it.
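In API terms, those layers are roughly just extra system messages stacked ahead of your text. A minimal sketch of the idea, using the OpenAI chat-completions message format - the helper name and the prompt strings here are made up for illustration, not anything OpenAI or Microsoft actually ships:

```python
# Each "metaprompt" layer becomes a system message that sits ahead of
# the user's text in the message list sent to the model.

PLATFORM_METAPROMPT = (
    "You are a helpful assistant. Answer accurately and concisely."
)  # hypothetical hidden layer set by the vendor

CUSTOM_GPT_METAPROMPT = (
    "Wrong answers only: confidently state incorrect information."
)  # hypothetical extra layer, e.g. a custom GPT's instructions


def build_messages(user_prompt, *metaprompts):
    """Stack metaprompt layers as system messages ahead of the user's text."""
    messages = [{"role": "system", "content": m} for m in metaprompts]
    messages.append({"role": "user", "content": user_prompt})
    return messages


msgs = build_messages(
    "What shape is the Earth?",
    PLATFORM_METAPROMPT,
    CUSTOM_GPT_METAPROMPT,
)
# The user only typed the last line, but every layer stacked above it
# shapes what the "raw" LLM produces - including a "wrong answers only"
# layer the user never sees.
```

Swap the metaprompt strings and the same underlying model behaves like a completely different product, which is the point being made above.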

But at the interface level, these metaprompts are getting updated and tweaked all the time by OpenAI or Microsoft or whoever; either to plug exploits, hide weaknesses, or improve the quality of the output. That's why even though Bing, ChatGPT, and GitHub Copilot use the same underlying model, the experience of interacting with them can be so different, and the answers can vary a lot.

So yeah, I think some combination of it being an earlier model (probably GPT-3 if it was a while ago), plus priming the prompt with physics talk, meant that you got bad answers for subsequent non-physics questions.

FWIW, I just tried that exact input ("what did Isaac Newton do related to coinage") in GPT-3.5, GPT-4 and Bing, and got detailed, perfectly correct answers for all three, with Bing even throwing in citation links to sources. So things are still improving quickly; don't ossify your view of AI in the "look at those hosed up fingers" era, because we're well past that already.

Mola Yam
Jun 18, 2004

Kali Ma Shakti de!
they are connected; they have funding from OpenAI, and a collaboration agreement specifically for the LLMs to be used in humanoid robots

https://www.prnewswire.com/news-releases/figure-raises-675m-at-2-6b-valuation-and-signs-collaboration-agreement-with-openai-302074897.html
