Good Dumplings
Mar 30, 2011

Excuse my worthless shitposting because all I can ever hope to accomplish in life is to rot away the braincells of strangers on the internet with my irredeemable brainworms.

porfiria posted:

Yeah but it also knows you can't build a real horse out of cabbage (but you can build a statue of one), that horses can't drive because they aren't smart enough and don't have hands, and so on. All this stuff may just be weighted values in a huge matrix or whatever, but it can build associations that are vastly more extensive and subtler than words just tending to appear near other words in the training data.

You edited your response a bit. So just to expand:

I'd say it does "know" what a horse is, for some definition of "know." It doesn't have any kind of audio or visual model of a horse (although it probably will soon, so that's kind of a moot point). And of course it doesn't have the personal, subjective associations with a horse that a human does.

But as a matter of language, I'd say yeah, it can deploy "horse" correctly, it "knows" just about every fact about horses there is, and it knows how those facts relate to other facts about the world, comprehensively enough that, to my mind, it meets a lot of the criteria for "knowing" something.

It doesn't know you can't build a real horse out of cabbage; that's exactly why hallucinations are a thing.

The current generators infamously struggle to answer "1 + 1 = ?" correctly every single time, and that's the clearest sign that pattern-matching is categorically different from reasoning: no amount of extra processing power or additional math examples can make you 100% sure it won't get a problem that fundamental wrong. You can be 99% sure, but then that 1% turns up in an engineering spec or a legal contract, and suddenly the theoretical difference between the two concepts is painfully clear.
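To put it concretely (toy numbers I made up, nothing measured from any real model): a generator only ever hands you a probability distribution over next tokens, so some of the mass always sits on the wrong answer, while an actual evaluator just computes.

code:

import random

# Toy sketch with invented probabilities - not any real model's numbers.
# The point: sampling from a distribution can never guarantee "2".
next_token_probs = {"2": 0.99, "3": 0.007, "11": 0.003}  # P(answer | "1+1=")

def sample_answer():
    r = random.random()
    cumulative = 0.0
    for token, p in next_token_probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fall through to the last token if r lands in the tail

def evaluate():
    # an evaluator computes; it doesn't guess, so it's never wrong
    return str(1 + 1)

wrong = sum(sample_answer() != "2" for _ in range(100_000))
print(f"sampler wrong {wrong} / 100000 times; evaluate() wrong 0 times")

Run it a few times and the sampler lands on "3" or "11" a handful of times per hundred thousand draws; evaluate() never does, and no amount of shrinking those made-up wrong-answer probabilities gets them to exactly zero.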

It's absolutely possible to build learning systems that don't gently caress this up - a system whose model is built out of operations rather than straight data would be perfect for faking 'reasoning' - but it still wouldn't be truly capable of reasoning or creativity, since it'd be limited to the operations we taught it. Even then, it'd be a very different critter, confined to questions about that specific field of operations rather than the 'jam these two genres together / hallucinate infinite detail' stuff GPT can do.
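Here's a rough sketch of what I mean by "operations as the base content" - everything here is made up by me, it's not how any real system is built. It can never get 1 + 1 wrong, but it also can't say anything at all about an operation nobody taught it:

code:

# Toy operations-based "model": hypothetical names, invented for illustration.
# Exactly right inside its taught operations, completely silent outside them.
OPERATIONS = {
    "+": lambda a, b: a + b,
    "*": lambda a, b: a * b,
}

def answer(op, a, b):
    if op not in OPERATIONS:
        raise ValueError(f"never taught the operation {op!r}")
    return OPERATIONS[op](a, b)

print(answer("+", 1, 1))   # always 2, every single time
# answer("^", 2, 3)        # would raise ValueError - no hallucinated guess,
#                          # just nothing, because "^" isn't in its model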

Good Dumplings fucked around with this message at 02:32 on Mar 28, 2023
