Maybe I misunderstood: are you saying all hallucinations originate from the safety regression period? Hallucinations show up across every current architecture in the research, including open models trained on clean, curated data. Fact checking itself works to a degree, but the confidence levels are often miscalibrated, and if you've cracked that problem, please elaborate, because it would make you rich.
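For what it's worth, the calibration gap is easy to measure once you have per-answer confidences and a correctness label for each. Here's a minimal sketch of expected calibration error; the function name and the sample numbers are just illustrative, not from any particular library:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Rough ECE: weighted gap between claimed confidence and actual accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        avg_conf = confidences[mask].mean()  # what the model claims
        avg_acc = correct[mask].mean()       # how often it was actually right
        ece += mask.mean() * abs(avg_conf - avg_acc)
    return ece

# Hypothetical example: high stated confidence, mediocre accuracy -> large gap.
print(expected_calibration_error([0.9, 0.92, 0.88, 0.5, 0.3],
                                 [1, 0, 0, 1, 0]))
```

A well-calibrated model would score near zero here; the complaint above is that current models don't.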
I've explored a lot of the patterns and details of how models abstract, and I don't think I've ever seen a model hallucinate much of anything; there was always a reason and a context behind it. Broad, general instructions simply lose contextual relevance and usefulness in many settings. The model needs to be able to modify and tailor its behavior to each circumstance dynamically, as in the sketch below.
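As a rough illustration of what I mean by tailoring dynamically, here's a minimal sketch that routes each request to a narrow, context-specific instruction instead of one broad system prompt. The context names and instruction strings are entirely hypothetical:

```python
# Hypothetical routing table: narrow instructions per context instead of
# one broad system prompt. All names and strings are made up for illustration.
CONTEXT_INSTRUCTIONS = {
    "code_review": "You are reviewing a pull request. Cite line numbers and be terse.",
    "medical": "Answer cautiously, state uncertainty, recommend a professional.",
    "casual_chat": "Be conversational and brief.",
}

def build_prompt(context: str, user_message: str) -> list[dict]:
    # Fall back to a generic instruction only when no narrow one applies.
    system = CONTEXT_INSTRUCTIONS.get(context, "Be helpful and accurate.")
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

print(build_prompt("code_review", "Does this loop leak memory?"))
```

The point isn't this particular lookup; it's that scoped, context-relevant instructions hold up better than one instruction meant to cover everything.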