We have to stop ignoring AI’s hallucination problem
(www.theverge.com)
Let's take a step back and not talk about training at all, but about spontaneous learning. A baby learns about the world around it by experiencing things with its senses. It learns a language, for example, simply by hearing it and making connections - getting corrected when it's wrong, yes, but it is not formally taught the language until it has already learned to speak it. And once children are taught how to read, they can explore the world through signs, books, the internet, etc. in a way that is often self-directed. More than that, humans are learning at every moment as they interact with the world around them and with the written word.
An LLM is a static model created through exposure to lots and lots of text. It is trained and then used. To add to the model requires an offline training process, which produces a new version of the model that can then be interacted with.
It's still not learning anything. LLMs have what's known as a context window that is used to augment the model for a given session. It's still just text that is used as part of the response process.
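To make that concrete, here's a deliberately tiny sketch (illustrative Python only, not any real API) of what "augmenting the model with a context window" amounts to: prior text concatenated onto the prompt, with the oldest text simply falling off when the window fills up. Nothing is stored or learned between sessions.

```python
# Toy illustration of a "context window": it's just text glued onto the
# prompt before generation, truncated to a fixed size. (Real systems
# measure the window in tokens, not characters; this is a sketch.)
def build_prompt(history: list[str], user_message: str, max_chars: int = 200) -> str:
    """Concatenate prior turns with the new message. Once the window is
    full, the oldest text is silently dropped - nothing is 'remembered'
    beyond what still fits in the window."""
    window = "\n".join(history + [user_message])
    return window[-max_chars:]

history = ["User: My name is Ada.", "Bot: Hi Ada!"]
print(build_prompt(history, "User: What's my name?"))
```

The model can "recall" your name only because that text is still sitting inside the window; scroll it out and the information is gone.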
You seem to have ignored the preceding sentence: "LLMs are sophisticated word generators." This is the crux of the matter. They simply do not think, much less understand. They are simply taking the text of your prompts (and the text from the context window) and generating more text that is likely to be relevant. Sentences are generated word-by-word using complex math (heavy on linear algebra and probability) where the generation of each new word takes into account everything that came before it, including the previous words in the sentence it's a part of. There is no thinking or understanding whatsoever.
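To make the "word generator" point concrete, here's a deliberately tiny sketch: a first-order chain that picks each next word by sampling from a probability table. A real transformer is vastly more complex and conditions on the entire preceding context rather than just the last word, but the core move is the same - sample the next word in proportion to likelihood, with no notion of truth anywhere in the process.

```python
import random

# Toy next-word table (made up for illustration): for each word, the
# possible continuations and their probabilities.
NEXT_WORD_PROBS = {
    "the": [("cat", 0.6), ("dog", 0.4)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 1.0)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def generate(start: str, length: int, seed: int = 0) -> list[str]:
    """Generate up to `length` more words, one at a time, each sampled
    in proportion to its probability given the previous word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        choices = NEXT_WORD_PROBS.get(words[-1])
        if not choices:  # no known continuation: stop
            break
        tokens, weights = zip(*choices)
        # Likelihood, not truth, decides what comes next.
        words.append(rng.choices(tokens, weights=weights, k=1)[0])
    return words

print(" ".join(generate("the", 4)))
```

Every sentence this produces is grammatical-looking and none of it is "known" to be true or false - which is exactly the situation an LLM is in, just at enormously greater scale.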
This is why Voroxpete@sh.itjust.works said in the original post to this thread, "They hallucinate all answers. Some of those answers will happen to be right." LLMs have no way of knowing whether any of the text they generate is accurate, for the simple reason that they don't know anything at all. They have no capacity for knowledge, understanding, thought, or reasoning. Their models are simply complex networks of words that are able to generate more words, usually in a way that is useful to us - but often, as the hallucination problem shows, in ways that are completely useless and even harmful.
But this is a deliberate decision, not an inherent limitation. The model could get feedback from the outside world; in fact, this is how it's trained (well, data is fed back into the model to update it). Of course we are limiting it to words, rather than the whole slew of inputs that a human gets. But keep in mind we have things like music and image generation AI as well, so it's not like it can't also be trained on these things. Again, a deliberate decision rather than an inherent limitation.
We both even agree it's true that it can learn from interacting with the world; you just insist that because it isn't persisted, it doesn't actually count. But it does persist, just not the new inputs from users. And this is done deliberately to protect the models from what would inevitably happen. That said, it's also been fed arguably more input than a human gets in their whole life, just condensed into a much smaller period of time. So if it's about "total input," then the AI wins hands down.
I'm not ignoring this. I understand that it's the whole argument; it gets repeated around here enough. Just saying it doesn't make it true, however. It may be true - again, I'm not sure - but simply stating it and adding "full stop" doesn't amount to a convincing argument.
It's not as open and shut as you wish it to be. If anyone is ignoring anything here, it's you ignoring the fact that it went from basically just, as you said, randomly stacking the objects it was told to stack stably, to actually doing so in a way that could work and describing why you would do it that way. Additionally, there is another case where they asked GPT-4 to draw a unicorn using an obscure programming language. And you know what? It did it. It was rudimentary, but it was clearly a unicorn - from a model that wasn't trained on images at all. They even messed with the code, turning the unicorn around and removing the horn, fed it back in, and asked it to replace the horn, and it put it back on correctly. It seemed to understand not only what a unicorn looked like, but what the horn was and where it should go once it was removed.
So saying it can just "generate more words" is something you could accuse us of as well - or possibly it's just overly reductive of what it's capable of even now.
There are all kinds of problems with human memory; we imagine things all of the time. Have you ever taken acid? If so, you'd see how unreliable our brains are at interpreting reality. And you want to really trip? Eyewitness testimony is basically garbage. I exaggerate a bit, but it has so many flaws - people remembering things that didn't happen, false memories being so easy to create - that it's not as convincing as it should be. Hell, it can even be harmful, convicting an innocent person.
Every shortcoming you've used to claim AI isn't real thinking is something shared with us. It might just be inherent to intelligence to be wrong sometimes.