[-] SaraTonin@lemmy.world 34 points 2 days ago

There are a few replies talking about humans misrepresenting the news. This is true, but part of the problem here is that most people understand the concept of bias, even if only to the extent of “my people neutral, your people biased”. That's less true for LLMs. There's research showing that because LLMs present information authoritatively, people not only tend to trust them but are also less likely to check the sources the LLM provides than they would be with other ways of being presented with information.

And it’s not just news. I’ve seen people seriously argue that fringe pseudo-science is correct because they fed a very leading prompt into a chatbot and got exactly the answer they were looking for.

[-] Axolotl_cpp@feddit.it 4 points 1 day ago

I hear a lot of people say "let's ask ChatGPT" like the AI is a god and knows everything 🙏. That's a big problem, to be honest.

[-] Yerbouti@sh.itjust.works 19 points 2 days ago

I don't understand the use people make of AI. I know a lot of professional composers who are like "That's awesome, AI does the music for me now!" and I'm like, cool, now you only have the boring part of the job left, since the fun part was done by AI. Creating the music is literally the only fun part; I hate everything around it.

[-] balsoft@lemmy.ml 7 points 2 days ago

It's a word predictor. It is good at simple text processing. Think local code refactoring, changing the style or structure of a small text piece, or summarizing small text pieces into even smaller text pieces. It is ok at synthesizing new text that has similar structure to the training corpus. Think generating repetitive boilerplate or copywriting. It is very bad at recalling or checking facts, logic, mathematics, and everything else that people seem to be using it for nowadays.
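As a rough illustration of what "word predictor" means here, below is a minimal sketch, assuming the Hugging Face transformers library and the small GPT-2 checkpoint as a stand-in for whatever model a given product actually ships. The model only scores which token is statistically likely to come next; nothing in this loop checks whether the continuation is true.

```python
# Minimal next-token prediction sketch (assumption: transformers + GPT-2,
# not whatever model any particular news-summarizing product uses).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The BBC reported today that"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, seq_len, vocab_size)

next_token_scores = logits[0, -1]          # scores for the next token only
top5 = torch.topk(next_token_scores, k=5).indices
# The "answer" is whichever continuation is statistically likely,
# picked from these scores; no fact-checking happens anywhere.
print([tokenizer.decode(int(t)) for t in top5])
```

Everything an LLM-based product does, from refactoring code to summarizing an article, is built by repeating that one step, which is why it tends to be strong at reshaping text it is given and weak at recalling facts it isn't.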

[-] Amir@lemmy.ml 1 points 2 days ago

The AI creating music is not an LLM

[-] balsoft@lemmy.ml 1 points 2 days ago

Ah, sorry, missed the context

[-] oplkill@lemmy.world 7 points 1 day ago

Replace CEOs with AI

[-] HugeNerd@lemmy.ca 3 points 1 day ago

wrinkle: AI used for this study

[-] NotMyOldRedditName@lemmy.world 9 points 2 days ago

I've had someone else's AI summarize some content I created elsewhere, and it got it incredibly wrong to the point of changing the entire meaning of my original content.

[-] morrowind@lemmy.ml 8 points 2 days ago

"misrepresent" is a vague term. Actual graph from the study

The main issue is, as usual, sources. AI is bad at sources without a proper pipeline. They note that Gemini is the worst, at 72%.

Note that they're not testing the models with their own pipeline; they're testing other people's products. This is more indicative of the product design than of the actual models.
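To make that product-vs-model distinction concrete, here's a hedged sketch; `model.generate` and `search_news` are made-up placeholders for illustration, not any vendor's real API.

```python
# Hypothetical sketch: a bare model vs. a product pipeline that retrieves
# real articles and keeps their URLs attached. Names are illustrative only.

def bare_model_answer(model, question: str) -> str:
    # The model answers from its weights alone; any "source" it cites is
    # itself just generated text and may be wrong or nonexistent.
    return model.generate(question)

def pipeline_answer(model, search_news, question: str):
    # A product-level pipeline fetches actual articles first and asks the
    # model to answer only from them, so citations come from retrieval,
    # not from the model's imagination.
    articles = search_news(question)                # e.g. [(url, text), ...]
    context = "\n\n".join(text for _, text in articles)
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
    return model.generate(prompt), [url for url, _ in articles]
```

How much of the blame lands on the model versus the product depends heavily on which of those two setups is actually being tested.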

[-] davidagain@lemmy.world 6 points 1 day ago

This graph clearly shows that AI is also shockingly bad at factual accuracy and at telling a news story in a way that lets someone who didn't already know about it understand the issues and context. I think you're misrepresenting this graph as being only about sources, but here's a better summary of the point you seem to be making:

AI's summaries don't match their source data.

So actually, the headline is pretty accurate in calling it misrepresentation.

[-] Kissaki@feddit.org 13 points 2 days ago

Will they change their disclaimer now, from "can be wrong" to "is often wrong"? /s

[-] MonkderVierte@lemmy.zip 14 points 2 days ago

Parrot is wrong almost half of the time. Who knew?

[-] paraphrand@lemmy.world 14 points 2 days ago

Precision, nuance, and up-to-the-moment contextual understanding are all missing from the “intelligence.”

[-] Treczoks@lemmy.world 4 points 2 days ago

Like the average American with an 8th grade reading comprehension.

[-] snooggums@piefed.world 4 points 2 days ago

Which is what they used for the training data.

[-] FaceDeer@fedia.io 4 points 2 days ago

So it's about on par with humans, then.

[-] moistclump@lemmy.world 10 points 2 days ago

And then I wonder how frequently humans misinterpret the mistranslated news.

[-] snooggums@piefed.world 10 points 2 days ago

Humans do it often, but they don't have billions of dollars funding their responses.

[-] Treczoks@lemmy.world 7 points 2 days ago

Worse: One third of adults actually believe the shit the AI produces.

[-] AnUnusualRelic@lemmy.world 6 points 2 days ago

Yet the LLM seems to be what everyone is pushing, because it will supposedly get better. Haven't we reached the limits of this model and shouldn't other types of engines be tried?

[-] floofloof@lemmy.ca 4 points 2 days ago

shouldn’t other types of engines be tried?

Sure, but the tricky bit is to be more specific than that.

[-] AnUnusualRelic@lemmy.world 3 points 2 days ago

Well, you know...

"Waves vaguely"

[-] Korkki@lemmy.ml 5 points 2 days ago

The info-sphere today is already a highly delusional place, and news can often be contradictory, even from day to day, especially from outlets like the BBC, which is more focused on setting global narratives than on reporting the facts as best understood at the moment. No wonder AI would be confused; most readers are confused when navigating every statement made by experts or anonymous officials on every subject. It seems like this study really measured an AI model's ability to vomit out the same text in different words while avoiding any outside context, be it accurate or hallucinated.

[-] floofloof@lemmy.ca 2 points 2 days ago
[-] Jhex@lemmy.world 3 points 2 days ago

buT AI iS hERe tO StAY

[-] sirico@feddit.uk 3 points 2 days ago

So a lower percentage than the readers and the mass media

this post was submitted on 23 Oct 2025
595 points (100.0% liked)
