To be fair, as a human, I don’t feel any different.
The key difference is humans are aware of what they know and don't know and when they're unsure of an answer. We haven't cracked that for AIs yet.
When AIs do say they're unsure, that's their understanding of the problem, not an awareness of their own knowledge
The key difference is humans are aware of what they know and don't know
If this were true, the world would be a far far far better place.
Humans gobble up all sorts of nonsense because they “learnt” it. Same for LLMs.
I'm not saying humans are always aware of when they're correct, merely how confident they are. You can still be confidently wrong and know all sorts of incorrect info.
LLMs aren't aware of anything like self confidence