Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:
- Confident: 57% say the main LLM they use seems to act in a confident way.
- Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
- Sense of humor: 32% say their main LLM seems to have a sense of humor.
- Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
- Sarcasm: 17% say their main LLM seems to respond sarcastically.
- Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
And you know what? The people who believe that are right.
Note that this isn’t a commentary on the capabilities of LLMs.
It’s sad, but there’s that old George Carlin line, something along the lines of: “Just think of how stupid the average person is, and then realize that half of them are even worse.”
They are right, at least when it comes to understanding LLMs: the LLM definitely understands LLMs better than they do. I’m sure an AI could score perfectly on an IQ test, but it has a really hard time drawing a completely full glass of wine, or telling me how many R’s are in the word “strawberry.” Both are things a child could do.