Innocuous-looking paper, vaguely snake-oil scented: Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents
The conclusions aren’t entirely surprising: LLMs tend to go off the rails over the long term, independent of their context window size, which suggests that the much-vaunted future of autonomous agents might actually be a bad idea, because LLMs are fundamentally unreliable and only a complete idiot would trust them to do useful work.
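For anyone who hasn't skimmed the paper: the setup is roughly an LLM agent asked to run a simulated vending-machine business over a long run of simulated days, with only a limited slice of its own history fitting in the prompt at any step. Here's a toy sketch of that kind of long-horizon loop, which is my own reconstruction rather than the paper's actual harness; `call_llm` is just a placeholder for whatever chat-completion API you'd wire in, and the economics are made up.

```python
# Toy sketch (not Vending-Bench's real harness) of a long-horizon agent loop:
# the model is called once per simulated day, sees only a rolling window of
# recent history, and has to stay coherent across far more steps than fit in
# that window.
from dataclasses import dataclass, field

@dataclass
class VendingSim:
    cash: float = 500.0          # starting capital (made-up number)
    stock: int = 0               # units currently in the machine
    day: int = 0
    history: list = field(default_factory=list)  # full transcript, grows unbounded

    def step(self, action: str) -> str:
        """Apply one agent action and return an observation string."""
        self.day += 1
        if action.startswith("restock"):
            units, cost = 50, 50.0
            if self.cash >= cost:
                self.cash -= cost
                self.stock += units
                obs = f"Day {self.day}: restocked {units} units, cash ${self.cash:.2f}"
            else:
                obs = f"Day {self.day}: restock failed, insufficient cash"
        else:
            sold = min(self.stock, 20)   # crude fixed daily demand
            self.stock -= sold
            self.cash += sold * 2.0
            obs = f"Day {self.day}: sold {sold} units, cash ${self.cash:.2f}"
        self.history.append(obs)
        return obs

def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call; returns a canned action."""
    return "wait"  # a real agent would pick between e.g. "restock" and "wait"

def run(days: int = 2000, window: int = 30) -> float:
    sim = VendingSim()
    for _ in range(days):
        # Only the last `window` observations fit in the prompt; everything
        # older is gone, which is the long-term coherence problem being probed.
        prompt = "You run a vending machine.\n" + "\n".join(sim.history[-window:])
        sim.step(call_llm(prompt))
    return sim.cash

if __name__ == "__main__":
    print(f"Final cash after simulation: ${run():.2f}")
```

The point of the rolling window is that the failure mode the paper describes isn't "ran out of context"; the agents derail even when the relevant recent history is right there in the prompt.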
What’s slightly more entertaining are the transcripts.
You tell 'em, Claude. I'm happy for you to send these sorts of messages backed by my credit card. The future looks awesome!
I got around to reading the paper in more detail and the transcripts are absurd and hilarious:
And this is from Claude 3.5 Sonnet, which performed best on average of all the LLMs tested. I can see the future: businesses attempting to replace employees with LLM agents that 95% of the time do a sub-mediocre job (able to follow scripts given in the prompt to use preconfigured tools) and 5% of the time freak out and go down insane tangents. Well, actually, a 5% total failure rate would probably be noticeable in advance to all but the most idiotic manager, so they'll probably push reliability higher but fail to iron out the really insane edge cases.