[-] HairyHarry@lemmy.world 10 points 2 days ago

Ok, technically you are correct. Still, they are lies, or let's call it disinformation or propaganda, whether the output is controlled by the machine itself having a mind (which of course is sci-fi) or by those who control the machine.

[-] WhatAmLemmy@lemmy.world 3 points 2 days ago* (last edited 2 days ago)

What you're calling lies are false positives. To lie you have to know the truth. AIs are ignorant. They don't know what anything is, as all they "know" is mathematical patterns in 1s and 0s.

They would only be lies if Google engineers explicitly overrode the model to output the false information. What most implementations of LLMs are is weaponized incompetence, for profit. Capitalists know they output false information, and they don't care, because their only goal is profit and power.

[-] hesh@quokk.au 3 points 2 days ago

If Google knows it outputs falsehoods and lets it continue, it becomes purposeful. That makes them lies in my book.

[-] supamanc@lemmy.world 1 points 2 days ago

If a newspaper prints lies, you don't say the physical piece of pulped-up tree you are holding is lying to you; you say the author is.

[-] hesh@quokk.au 1 points 2 days ago

If the newspaper is shown that they are lies and it keeps on printing them, then yes, I do call it a liar as well. Whatever you want to call it, you must admit they are culpable for spreading disinformation.

[-] supamanc@lemmy.world 1 points 2 days ago* (last edited 2 days ago)

No, you are proving my point here. You say 'they', as in the publishers/owners/printers of the newspaper. You don't blame 'it', the literal, physical piece of paper you are holding in your hands.

In the same way that you would not say a clock was lying to you if it displays the wrong time.

[-] hesh@quokk.au 1 points 2 days ago* (last edited 2 days ago)

OK, so I don't blame the GPUs crunching out the LLM lies, or the HTML on the page, I blame Google the company that programmed them.

[-] supamanc@lemmy.world 1 points 2 days ago

The point is, the LLM is not 'lying' to you. It's showing you information. It doesn't 'know' whether the information is true or not, and it doesn't 'care', because it is a statistical model and is incapable of those things. And if you scroll back to my initial point, I said "technically, it's not lying, because lying requires intent to deceive, and LLMs don't have intent".
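[Editor's note: the "statistical model" point above can be made concrete with a toy sketch. Real LLMs are vastly more complex (neural networks over token probabilities), but this hypothetical bigram model illustrates the core claim: the model emits whatever continuation was most frequent in its training data, with no representation of truth anywhere in the process.]

```python
from collections import defaultdict, Counter

# Toy "training data": the model learns whatever patterns appear here,
# true or false alike -- it has no notion of truth, only frequency.
corpus = (
    "the moon is made of cheese . "
    "the moon is made of cheese . "
    "the moon is made of rock . "
).split()

# Build a bigram table: for each word, count which words follow it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, steps):
    """Greedily emit the statistically most likely next word at each step."""
    out = [start]
    for _ in range(steps):
        followers = bigrams[out[-1]]
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 5))
# Prints "the moon is made of cheese" -- the falsehood wins simply
# because it appeared more often, not because anything "decided" to deceive.
```

There is no intent anywhere in that loop, which is the distinction being argued: any blame attaches to whoever trains and deploys the system, not to the arithmetic itself.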

[-] hesh@quokk.au 1 points 2 days ago

What's the point of making this semantic difference though?

[-] supamanc@lemmy.world 1 points 1 day ago

Because 1) it's true, and the article is a bit misleading as to who is actually doing the lying, and 2) it's important to remember that LLMs are not sentient, and to push back against the tide of language which subtly suggests they are.

this post was submitted on 08 Apr 2026
249 points (100.0% liked)
