this post was submitted on 03 May 2026
TechTakes
That's really interesting. So the model can generalize the form of what a fact looks like from these monofacts, but ends up basically playing Mad Libs with the actual subjects. And if I understand the inverse relationship they describe between hallucination rate and calibration, even their best mechanism for reducing this (which seems to have applied some kind of back-end duplication to the specific monofacts, so the details stand out as much as the structure, I think?) made the model less well-calibrated. Though I'm not entirely sure what "less well-calibrated" amounts to overall. I think they're saying it should be worse at predicting the next token in general (more likely to output something nonsensical?) but also less prone to Mad Libs-style hallucinations.
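If I'm reading the monofact idea right, the relevant quantity is just the fraction of facts that show up exactly once in the training data (the Good-Turing singleton fraction). A toy sketch of that, to make the intuition concrete (this is my own illustration, not code from the paper, and the fact format is made up):

```python
from collections import Counter

def monofact_rate(facts):
    """Fraction of observed facts that appear exactly once.

    This is the Good-Turing 'missing mass' estimate: the singleton
    fraction is, roughly, the probability mass the model has only one
    chance to memorize -- the kind of facts where it learns the
    structure but can Mad Libs the subject.
    """
    counts = Counter(facts)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(facts)

# Toy corpus: most "facts" repeat, two are monofacts.
corpus = (["A was born in 1970"] * 3
          + ["B was born in 1980"] * 2
          + ["C was born in 1990", "D was born in 2000"])
print(monofact_rate(corpus))  # 2 singletons / 7 observations ~ 0.286
```

The back-end duplication trick, as I understand it, would amount to repeating the singleton facts in training so they stop being singletons, which drives this rate down at the cost of distorting the training distribution (hence the calibration hit).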