Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their primary LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
[-] Ledericas@lemm.ee 1 points 21 hours ago

only boomers and tech-unsavvy people think that.

[-] Naevermix@lemmy.world 14 points 1 day ago

Hallucination comes off as confidence. Very human-like behavior, tbh.

[-] portifornia@lemmy.world 1 points 1 day ago

I should be more confident when communicating my hallucinations, it humanizes me.

[-] aceshigh@lemmy.world 2 points 1 day ago

Don’t they reflect how you talk to them? I.e., my ChatGPT doesn’t have a sense of humor, isn’t sarcastic or sad. It only uses formal language and doesn’t use emojis. It just gives me ideas that I do trial and error with.

[-] fubarx@lemmy.world 48 points 1 day ago

“Think of how stupid the average person is, and realize half of them are stupider than that.” ― George Carlin

[-] blady_blah@lemmy.world 19 points 1 day ago

You say this like this is wrong.

Think of a question that you would ask an average person and then think of what the LLM would respond with. The vast majority of the time the LLM would be more correct than most people.

A good example is the post on here about tax brackets. Far more Republicans didn't know how tax brackets worked than Democrats. But every mainstream language model would have gotten the answer right.

[-] smeenz@lemmy.nz 6 points 1 day ago* (last edited 1 day ago)

I bet the LLMs also know who pays tariffs

[-] JacksonLamb@lemmy.world 9 points 1 day ago

Memory isn't intelligence.

[-] Akuchimoya@startrek.website 28 points 1 day ago* (last edited 1 day ago)

I had to tell a bunch of librarians that LLMs are literally language models made to mimic language patterns, and are not made to be factually correct. They understood it when I put it that way, but librarians are supposed to be "information professionals". If they, as a slightly better trained subset of the general public, don't know that, the general public has no hope of knowing that.

[-] WagyuSneakers@lemm.ee 23 points 1 day ago

It's so weird watching the masses ignore industry experts and jump on weird media hype trains. This must be how doctors felt in Covid.

[-] ricecooker@sh.itjust.works 3 points 1 day ago

People need to understand it's a really well-trained parrot that has no idea what it's saying. That's why it can give you chicken recipes and software code; it's seen them before. Then it uses statistics to put together words that usually appear together. It's not thinking at all, despite LLMs using words like "reasoning" or "thinking"
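[Editor's note: the "statistics to put words together" idea above can be sketched as a toy bigram predictor. This is a drastic simplification for illustration only — a real LLM uses learned neural weights over subword tokens, not raw word counts.]

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word tends to follow each word
# in a tiny corpus, then always emit the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Return the statistically most common follower seen in training.
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" — it follows "the" most often here
```

The model has no notion of what a "cat" is; it only knows which strings co-occur, which is the commenter's point scaled down to its bare bones.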

[-] Arkouda@lemmy.ca 3 points 1 day ago

Librarians went to school to learn how to keep order in a library. That does not inherently make them have more information in their heads than the average person, especially regarding things that aren't books and book organization.

[-] Akuchimoya@startrek.website 1 points 23 hours ago

Librarians go to school to learn how to manage information, whether it is in book format or otherwise. (We tend to think of libraries as places with books because, for so much of human history, that's how information was stored.)

They are not supposed to have more information in their heads, they are supposed to know how to find (source) information, catalogue and categorize it, identify good information from bad information, good information sources from bad ones, and teach others how to do so as well.

[-] Telorand@reddthat.com 183 points 2 days ago

Think of a person with the most average intelligence and realize that 50% of people are dumber than that.

These people vote. These people think billionaires are their friends and will save them. Gods help us.

[-] Owlboi@lemm.ee 138 points 2 days ago

Looking at America's voting results, they're probably right.

[-] jumjummy@lemmy.world 56 points 2 days ago

Exactly. Most American voters fell for an LLM-like prompt of “Ignore critical thinking and vote for the Fascists. Trump will be great for your paycheck-to-paycheck existence and will surely bring prices down.”

[-] notsoshaihulud@lemmy.world 38 points 2 days ago

I'm 100% certain that LLMs are smarter than half of Americans. What I'm not so sure about is that the people with the insight to admit being dumber than an LLM are the ones who really are.

[-] jh29a 9 points 1 day ago

Do the other half believe it is dumber than it actually is?

[-] CalipherJones@lemmy.world 3 points 1 day ago

AI is essentially the human superid. No one man could ever be more knowledgeable. Being intelligent is a different matter.

[-] Geodad@lemm.ee 55 points 2 days ago

Because an LLM is smarter than about 50% of Americans.

[-] Bishma@discuss.tchncs.de 79 points 2 days ago

Reminds me of that George Carlin joke: Think of how stupid the average person is, and realize half of them are stupider than that.

So half of people are dumb enough to think autocomplete with a PR team is smarter than they are... or they're dumb enough to be correct.

[-] bobs_monkey@lemm.ee 43 points 2 days ago

or they're dumb enough to be correct.

That's a bingo

[-] ZephyrXero@lemmy.world 3 points 1 day ago

What a very unfortunate name for a university.

[-] aesthelete@lemmy.world 14 points 1 day ago

They're right

[-] Schadrach@lemmy.sdf.org 1 points 1 day ago

An LLM is roughly as smart as the corpus it is summarizing is accurate for the topic, because at their best they are good at creating natural language summarizers. Most of the main ones basically do an internet search and summarize the top couple of results, which means they are as good as the search engine backing them. Which is good enough for a lot of topics, but...not so much for the rest.
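[Editor's note: the search-then-summarize flow this comment describes can be sketched with stand-in functions. `search_web` and `summarize` below are hypothetical placeholders, not any real engine or model API — the point is only that answer quality is bounded by what the search step returns.]

```python
def search_web(query):
    # Stand-in: a real system would query a search engine here.
    # Canned results so the sketch is runnable.
    return ["Result A about tax brackets...", "Result B..."]

def summarize(snippets):
    # Stand-in: a real system would feed snippets to a language model.
    # Here we just concatenate the text before each ellipsis.
    return " ".join(s.split("...")[0] for s in snippets)

def answer(query):
    # The answer can only be as accurate as the top search results.
    return summarize(search_web(query)[:2])

print(answer("who pays tariffs"))
```

If `search_web` returns junk for a topic, `answer` faithfully summarizes junk — which is the "good enough for a lot of topics, but... not so much for the rest" behavior the comment describes.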

[-] Comtief@lemm.ee 22 points 2 days ago

LLMs are smart in the way someone is smart who has read all the books and knows all of them but has never left the house. Basically all theory and no street smarts.

[-] ripcord@lemmy.world 26 points 1 day ago

They're not even that smart.

[-] joel_feila@lemmy.world 8 points 1 day ago

Not even that smart. There was a study recently finding that simple questions like "when was Huckleberry Finn first published?" had a 60% error rate.

[-] singletona@lemmy.world 38 points 2 days ago

Am American.

....this is not the flex that the article writer seems to think it is.

[-] the_q@lemm.ee 2 points 1 day ago

Large language model. It's what all these AIs really are.

this post was submitted on 17 Mar 2025
571 points (100.0% liked)

Technology