porfiria posted:

    Yeah but it also knows you can't build a real horse out of cabbage (but you can build a statue of one), that horses can't drive because they aren't smart enough and don't have hands, and so on. All this stuff may just be weighted values in a huge matrix or whatever, but it can build associations that are vastly more extensive and subtler than words just tending to appear near other words in the training data.

It doesn't know you can't build a real horse out of cabbage; that's exactly why hallucinations are a thing.

Current generators are infamous for failing to consistently answer even something as trivial as 1 + 1, and that's the clearest sign that pattern-matching is categorically different from reasoning: you can't add processing power or feed in more examples of math and ever be 100% sure it won't get a fundamental problem like that wrong. You can be 99% sure, but then that 1% of the time comes up in an engineering spec, or a legal contract, and suddenly the theoretical difference between those concepts is painfully clear.

It's absolutely possible to create learning systems that don't gently caress this up - a system that uses operations as the base content of its model rather than straight data would be perfect for faking 'reasoning' - but it still wouldn't be truly capable of reasoning or creativity, since it'd be limited to the operations we taught it. Even then, it'd be a very different critter, limited to questions about that specific field of operations rather than the 'jam these two genres together/hallucinate infinite detail' stuff GPT can do.
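To make the 99%-vs-guaranteed point concrete, here's a toy sketch in Python (purely illustrative - the corpus, functions, and numbers are all made up, and nothing here resembles how a real transformer works): one function answers 1 + 1 by sampling from completions it has seen, the other applies the rule of addition itself.

```python
import random

# Hypothetical "training data": 99 correct answers and 1 wrong one,
# the way scraped text is mostly-but-not-always right.
corpus = [("1+1", "2")] * 99 + [("1+1", "11")]

def pattern_match(prompt, data):
    """Answer by sampling a completion seen after this prompt.

    Right ~99% of the time on this corpus, wrong the other ~1%;
    more data shifts the odds but never makes them exactly 100%.
    """
    completions = [ans for q, ans in data if q == prompt]
    return random.choice(completions)

def symbolic(prompt):
    """Apply the operation itself; the answer cannot vary with the data."""
    a, b = prompt.split("+")
    return str(int(a) + int(b))

wrong = sum(pattern_match("1+1", corpus) != "2" for _ in range(10_000))
print(f"pattern matcher wrong {wrong} / 10000 times")  # roughly 100
print(symbolic("1+1"))                                 # always "2"
```

The statistical answerer can only ever be as reliable as its data, while the rule-based one is exactly as reliable as its rules - which is the trade described above: guaranteed answers, but only inside the field of operations you gave it.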
Good Dumplings fucked around with this message at 02:32 on Mar 28, 2023

# Mar 28, 2023 02:23