[-] HairyHarry@lemmy.world 20 points 2 days ago

You can give them intentions. Grok is not a Nazi by magic.

[-] Dojan@pawb.social 23 points 2 days ago

Don’t know if I’d call that an intention of the machine but rather the creator. Hate to be that kind of person but it’s similar to the whole thing of “guns don’t kill, people do.”

LLMs aren’t people. They’re not self-aware and don’t have any inner complexities like say, a dog, or a sheep has. There’s no drive or motivation. It’s just maths.

If you tie someone to a train track, and a train comes along killing them, it’s not like the train or the track intended to kill the person. That was the intent of you, who “programmed” the scenario.

Similar to guns, strict control is what will be needed to fix these kinds of things. Megalomaniac billionaires who see people as nothing but numbers running amok with narcissistic manipulator systems isn’t a recipe for anything good.

[-] HairyHarry@lemmy.world 10 points 2 days ago

Ok, technically you are correct. Still, they are lies, or let's call it disinformation or propaganda, whether the output is controlled by the machine itself having a mind (which of course is sci-fi) or by those who control the machine.

[-] WhatAmLemmy@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

What you're calling lies are false positives. To lie you have to know the truth. AIs are ignorant. They don't know what anything is, as all they "know" is mathematical patterns in 1s and 0s.

They would only be lies if Google engineers explicitly overrode the model to output the false information. What most implementations of LLMs are is weaponized incompetence, for profit. Capitalists know they output false information, and they don't care, because their only goal is profit and power.

[-] hesh@quokk.au 3 points 1 day ago

If Google knows it outputs falsehoods and lets it continue, it becomes purposeful. That makes them lies in my book.

[-] supamanc@lemmy.world 1 points 1 day ago

If a newspaper prints lies, you don't say the physical piece of pulped-up tree you are holding is lying to you; you say the author is.

[-] hesh@quokk.au 1 points 1 day ago

If it's shown to the newspaper that they are lies and they keep on printing them, then yes, I do call them liars as well. Whatever you want to call it, you must admit they are culpable for spreading disinformation.

[-] supamanc@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

No, you are proving my point here. You say 'they' as in the publishers/owners/printers of the newspaper. You don't blame 'it' the literal, physical piece of paper you are holding in your hands.

In the same way that you would not say a clock was lying to you if it displays the wrong time.

[-] hesh@quokk.au 1 points 1 day ago* (last edited 1 day ago)

OK, so I don't blame the GPUs crunching out the LLM lies, or the HTML on the page, I blame Google the company that programmed them.

[-] supamanc@lemmy.world 1 points 1 day ago

The point is, the LLM is not 'lying' to you. It's showing you information. It doesn't 'know' whether the information is true or not. It also doesn't 'care', because it is a statistical model and is incapable of those things. And if you scroll back to my initial point, I said "technically, it's not lying, because lying requires intent to deceive, and LLMs don't have intent".

[-] hesh@quokk.au 1 points 1 day ago

What's the point of making this semantic difference though?

[-] supamanc@lemmy.world 1 points 1 day ago

Because 1) it's true, and the article is a bit misleading as to who is actually doing the lying, and 2) it's important to remember that LLMs are not sentient and to push back against the tide of language which subtly suggests they are.

[-] Specter@piefed.social 5 points 1 day ago

It doesn’t really matter whether it’s the machine or the creator.

The point is, AIs can be programmed to lie, much like Grok does. And if they can be programmed to lie, then they are not reliable for anything at all. We are going through a decent period where AI can be used for a few things reliably, but even these will surely be enshittified.

[-] supamanc@lemmy.world 7 points 1 day ago

Oooh, philosophy! I disagree. I think that if a person programs an LLM to give disinformation, that's all it is: a lie, spreading misinformation while knowing it's disinformation, intending to deceive. The LLM doesn't know what's true or false. It doesn't intend anything, because it is not a conscious entity. The person who programmed it can be lying by disseminating false information; the LLM cannot, any more than a broken clock or thermometer is 'lying' about the time or temperature.

[-] Specter@piefed.social 2 points 1 day ago

I am trying to get away from the philosophy actually 😅 in the end what matters is how these tools are being used, not so much their inherent characteristics.

Can you envision a world where AI chatbots will be used to lead you down certain political beliefs (e.g. capitalism good, socialism bad), product recommendations will be made based on how much brands are willing to pay for ad placements, and your psychological state will be measured and molded to the interests of the AI owner? I can. It’s also already happening.

[-] deliriousdreams@fedia.io 2 points 1 day ago

It matters because every time we anthropomorphize generative AI LLMs, we reinforce people's belief in their ability to tell lies or truths.

People's belief is what leads to trust in them and to things like AI psychosis.

An interesting way to look at it is AI also can't tell the truth.

What it does is generate the next likely word or words based on the strongest statistical patterns in its training data. So it doesn't know anything. It doesn't tell the truth. It doesn't tell lies. It isn't an entity. The people behind it are allowing it to present information as factual, and we have no reason to trust them.
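The "next likely word" point above can be made concrete with a toy sketch (not a real LLM, just an illustrative bigram model of my own): the program below picks the continuation that most often followed a word in its training text, with no concept of whether the result is true.

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word follows which in a tiny "corpus".
corpus = "the sky is blue the sky is blue the sky is green".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Return the statistically most frequent continuation.
    # Frequency is the only criterion -- there is no notion of truth.
    return follows[word].most_common(1)[0][0]

print(next_word("is"))  # "blue", purely because it occurred most often
```

The model outputs "blue" after "is" because that pairing is most frequent, not because the sky is blue; a corpus where "green" dominated would make it say "green" just as confidently.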

this post was submitted on 08 Apr 2026
247 points (100.0% liked)