goferchan
Feb 8, 2004

It's 2006. I am taking 276 yeti furs from the goodies hoard.

Good Dumplings posted:

It doesn't know you can't build a real horse out of cabbage; that's the exact reason why hallucinations are a thing.

The current generators infamously have problems consistently answering 1+1 = ? correctly, and that's the most obvious sign that pattern-matching is categorically different from reasoning: you can't just add processing power or more examples of math and be 100% sure it won't get fundamental problems like that wrong. You can be 99% sure, but then that 1% of the time comes up in an engineering spec, or a legal contract, and suddenly the theoretical difference between those concepts is painfully clear.

It's absolutely possible to create learning systems that don't gently caress this up - a system that uses operations as the base content of its model rather than straight data would be perfect for faking 'reasoning' - but it still wouldn't be truly capable of it or creativity since it'd be limited to only the operations we taught its model. Even then, it'd be a very different critter, limited to questions about that specific field of operations rather than the 'jam these two genres together/hallucinate infinite detail' that GPT can do.
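
To put rough numbers on that 99%/1% point above: a toy back-of-the-envelope (plain Python; the 99% per-answer accuracy is just an assumed figure, not a measurement of any real model) shows how fast a small per-answer error rate compounds once a spec or contract asks a pile of questions:

# Toy illustration: a small per-answer error rate compounds quickly
# as the number of answers you depend on grows.
per_answer_accuracy = 0.99   # assumed figure, purely for illustration

for n_questions in (1, 10, 100, 500):
    p_all_correct = per_answer_accuracy ** n_questions
    print(f"{n_questions:>4} answers: "
          f"{(1 - p_all_correct) * 100:.1f}% chance at least one is wrong")

By 100 answers you're already past a coin flip on getting at least one wrong, which is the practical gap between "usually right" and "reliable".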

As an AI language model, its understanding of horses is based on the textual data and patterns it has been trained on. It does not possess personal experiences, emotions, or a physical presence, so its understanding is limited to the information it has been exposed to during its training.

In that context, its understanding of horses is derived from the descriptions, facts, and relationships between concepts that are found within the text data it has been trained on. It can provide information about horses, answer questions related to them, and discuss various aspects of horses based on that information, but its understanding is ultimately rooted in language and text, rather than personal experience or direct perception.


goferchan
Feb 8, 2004

It's 2006. I am taking 276 yeti furs from the goodies hoard.

reignonyourparade posted:

This is true, but the same could be said about a non-negligible number of humans when it comes to horses.

You are not wrong, but honestly that post you're replying to was just a copy-paste of ChatGPT-4's response when I asked it to describe how it understood horses in the third person.

goferchan
Feb 8, 2004

It's 2006. I am taking 276 yeti furs from the goodies hoard.

Count Roland posted:

I just didn't understand how useful it was. I knew it was very good at writing code, for example. But I didn't know it could help a writer so effectively with the creative process. I've only just started to play with it myself, so I still have much to learn.

Aramis posted:

It's actually the other way around: it's surprisingly good at writing code, but still pretty bad at anything non-trivial. It really shines at writing/editing/summarising tasks. Giving it a paragraph and asking it to fluff it up, or crunch it down to its minimal essence, works almost all of the time.

Yeah, it's very useful for things that take a little bit of brainpower, stuff you could definitely do but just don't feel like doing. "Make this more concise" or "rephrase this in the style of a business email" or whatever typically produces very acceptable results.
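
If anyone wants to script that kind of "make this more concise" request instead of doing it in the chat window, here's a rough sketch against the OpenAI Python library (this assumes the pre-1.0 openai package; the model name, system prompt, and OPENAI_API_KEY env var are placeholder choices, not anything from this thread):

# Rough sketch: ask the chat API to tighten up a paragraph.
# Assumes the legacy openai client (pip install "openai<1.0") and an API key in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

draft = "Paste the long-winded paragraph you want crunched down here."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder; swap in whichever model you have access to
    messages=[
        {"role": "system", "content": "You are a concise copy editor."},
        {"role": "user", "content": f"Make this more concise:\n\n{draft}"},
    ],
)

print(response["choices"][0]["message"]["content"])

The "rephrase this in the style of a business email" trick is the same call with a different user prompt.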
