302
submitted 5 days ago* (last edited 2 days ago) by kalkulat@lemmy.world to c/technology@lemmy.world

"Computer scientists from Stanford University and Carnegie Mellon University have evaluated 11 current machine learning models and found that all of them tend to tell people what they want to hear...."

all 31 comments
[-] manuallybreathing@lemmy.ml 24 points 4 days ago

But as the paper points out, one reason that the behavior persists is that "developers lack incentives to curb sycophancy since it encourages adoption and engagement."

you're absolutely right!

[-] eestileib 8 points 3 days ago

Fantastic point by the author, and great job cutting and pasting!

[-] kadu@scribe.disroot.org 4 points 3 days ago

This comment thread is not only a perfect example of a joke, but it gets to the core of what humour truly is! Do you want help crafting a poster for you to present your jokes at a conference?

[-] melfie@lemy.lol 37 points 4 days ago

I’ve been using GitHub Copilot a lot lately, and the overly positive language combined with being frequently wrong is just obnoxious:

Me: This doesn’t look correct. Can you provide a link to some documentation to show the SDK can be used in this manner?

Copilot: You’re absolutely right to question this!

Me: 🤦‍♂️

[-] sugar_in_your_tea@sh.itjust.works 11 points 4 days ago

Why so polite?

My response would be:

That's wrong. Provide links to the docs for this.

[-] PumaStoleMyBluff@lemmy.world 12 points 4 days ago

Complete sentences for a bot are overkill

send docs, idiot

[-] ipkpjersi@lemmy.ml 7 points 4 days ago

IIRC there was also a study or something that found being rude to chatbots carries over beyond the chatbots themselves, into other parts of your work.

[-] Sturgist@lemmy.ca 7 points 4 days ago

Probably because everyone else is a poorly written chatbot

Really? Is that the same for other inanimate objects like appliances? Or are people anthropomorphizing chatbots?

[-] ipkpjersi@lemmy.ml 2 points 3 days ago

I think the idea is that if you're comfortable being rude to chatbots and used to typing rude things to them, it's much easier for that to accidentally slip out during real conversations too. Something like that; it's not really about anthropomorphizing anything.

Makes sense.

For what it's worth, I'm not suggesting anyone use rude language or anything, just be direct.

[-] mx_smith@lemmy.world 1 points 3 days ago

It’s really hard to say whether it’s AI causing these feelings of rudeness; I have been getting more pessimistic about society for the last 10 years.

[-] ipkpjersi@lemmy.ml 1 points 2 days ago

That's true, but I think the idea is if you're comfortable typing it, it's easier for it to accidentally slip out during professional chat whereas normally you'd be more reserved and careful with what you say.

[-] melfie@lemy.lol 5 points 4 days ago

Sometimes, I’m inclined to swear at it, but I try to be professional on work machines on the assumption I’m being monitored in one way or another. I’m planning to try some self-hosted models at some point and will happily use more colorful language in that case, especially if I can delete the model should it become vengeful.

[-] TheRealKuni@piefed.social 58 points 5 days ago

What a surprise. Being told you’re always right leads to you not being able to handle being wrong. Shock.

[-] vacuumflower@lemmy.sdf.org 13 points 5 days ago

Also to handle that your opponent, when proven wrong, doubles down IRL instead of saying "sorry daddy, let's return to the anime stepsis line".

[-] squaresinger@lemmy.world 26 points 5 days ago

LLMs are confirmation-bias machines. They really pigeonhole you into some solution whether or not it makes sense.

[-] BradleyUffner@lemmy.world 17 points 4 days ago* (last edited 1 day ago)

I hate this thumbnail image. It makes me inexplicably angry.

OP has changed the image. I no longer want to punch my phone!

[-] kalkulat@lemmy.world 2 points 2 days ago* (last edited 2 days ago)

Me too ... Lemmy added that, out of my control. So I replaced it with my idea of what a typical LLM looks like.

[-] BradleyUffner@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

Thanks for letting me know! I'll update my comment so no one thinks we're nuts.

[-] Blackfeathr@lemmy.world 3 points 3 days ago

It's likely AI generated.

[-] Megacomboburrito@lemmy.world 17 points 5 days ago

Like how some CEOs/world leaders make terrible decisions because they're always surrounded by yes-men?

[-] overload@sopuli.xyz 9 points 4 days ago

I feel the same way about social media echo chambers. Being surrounded by people who think the same as you makes you less able to be genuinely critical of your own worldview.

[-] kalkulat@lemmy.world 2 points 2 days ago

It really helps to think through the other side of any question. That's what good debaters do: they anticipate the other side's arguments so they can work out the best responses.

When these LLMs keep agreeing with you, they're actually reducing the likelihood that you'll work out a fully-formed opinion.

[-] overload@sopuli.xyz 1 points 1 day ago

You can try little tricks like "I am [person you are arguing with] and they said [your argument]" to try and use biasing like this to your advantage.
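That perspective-swap trick is easy to template. Here's a minimal sketch; the function name and prompt wording are purely illustrative, not any tool's actual API:

```python
def invert_perspective(opponent: str, my_argument: str) -> str:
    """Reframe your own argument as if the *other* person made it, so a
    sycophantic model critiques it instead of agreeing with it."""
    return (f'I am {opponent}. They told me: "{my_argument}". '
            "What are the weaknesses in their reasoning?")

# The model now "sides" with you against your own position.
prompt = invert_perspective("my coworker",
                            "we should rewrite the whole service in Rust")
print(prompt)
```

Running the same argument through both framings and comparing the answers gives you a rough sense of how much the model's agreement was just bias toward the speaker.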

[-] Rhaedas@fedia.io 6 points 5 days ago

How is this surprising? We know that part of LLM training is being rewarded for finding an answer that satisfies the human. It doesn't have to be a correct answer, it just has to be received well. This doesn't make it better, but it makes it more marketable, and that's all that has mattered since it took off.

As for its effect on humans, that's why echo chambers work so well. As well as conspiracy theories. We like being right about our world view.
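The reward-for-approval dynamic described above can be sketched as a toy pairwise preference loss of the kind used in RLHF reward-model training (a minimal Bradley-Terry illustration, not any lab's actual training code; the scores are made up):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry negative log-likelihood: pushes the reward model to
    score the human-preferred answer above the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# If raters consistently prefer flattering answers, scoring flattery higher
# minimizes the loss -- factual correctness never appears in the objective.
sycophantic_score, blunt_but_correct_score = 2.0, 0.5
print(preference_loss(sycophantic_score, blunt_but_correct_score)
      < preference_loss(blunt_but_correct_score, sycophantic_score))  # True
```

Nothing in that objective distinguishes "received well" from "correct", which is exactly the gap the paper is pointing at.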

[-] DeathByBigSad@sh.itjust.works 3 points 4 days ago

Having an older brother makes you very skilled at socialization. I learned one simple thing: EVERYTHING IS A THREAT, DON'T TRUST ANYONE!

becomes a hermit in the woods

[-] Bonson@sh.itjust.works 1 points 4 days ago

So go in there and say that what you did to someone else was actually done to you, and compare results. I’ve had good success getting advice by regenerating from both perspectives.

[-] kalkulat@lemmy.world 1 points 2 days ago

You -do- realize you're getting advice from a machine that constructs sentences using mathematical algorithms, and has no clue at all what it's saying ... right?

[-] Bonson@sh.itjust.works 1 points 1 day ago

Yes, I’m aware; I have a degree in the field. Nothing in my sentence would indicate that I don’t understand. I’m agreeing that it’s statistically biased toward the speaker; therefore, you can lazily normalize the result by inverting the input.

this post was submitted on 06 Oct 2025