this post was submitted on 21 Nov 2025
1024 points (100.0% liked)
Technology
It was asked leading questions about embarrassing topics, but the over-the-top, fawning praise for Musk was baked into the responses without being specifically requested in the prompts.
I think this is the main story. It's not new info, but it confirms the issue persists: this LLM is so heavily trained to fawn over Musk that it doesn't apply any context or make any attempt to find the truth.
Which is sad.
No current AI understands context; they're just glorified word predictors!
I agree and go further; no AI understands anything. Just like a calculator doesn't understand "2+2=4", it's just pushing numbers around.
But at least the calculator is designed to combine numbers in the right order. LLMs, on the other hand, could come up with a different answer every time and different models would come up with wildly different answers based on their training data.
The calculator comes with guarantees.
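The contrast above can be sketched in a few lines. This is a toy illustration, not any real model's code: the token probabilities are made-up numbers, and `sample_next_token` is a hypothetical helper showing why temperature sampling can give a different answer on each call while `2 + 2` never can.

```python
import random

# Hypothetical next-token distribution a model might assign after "2+2=".
# These probabilities are invented for illustration only.
next_token_probs = {"4": 0.90, "5": 0.06, "22": 0.04}

def sample_next_token(probs, temperature=1.0, rng=random):
    """Sample one token; temperature > 0 makes the output non-deterministic."""
    # Reweight probabilities by temperature (higher = flatter, more random).
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    cumulative = 0.0
    for token, w in weights.items():
        cumulative += w
        if r < cumulative:
            return token
    return token  # fallback for floating-point edge cases

# The calculator's answer is guaranteed, every time:
assert 2 + 2 == 4

# The sampler's answer is not: across many calls it can emit "4", "5", or "22".
samples = {sample_next_token(next_token_probs, temperature=1.5) for _ in range(1000)}
print(samples)
```

Different training data would change `next_token_probs` entirely, which is why different models give wildly different answers to the same prompt.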
Grok is Harrison Bergeron and Musk is Diana Glampers
It's really just an affirmation machine for tech bros
Yes, just some people figuring out that Grok was steered toward ass-kissing Musk no matter what, and exploiting that for funny output. So the takeaways are: