They do just predict the next token, though, lol. That simplifies a significant amount, but fundamentally, that's how they work, and I'm not sure how you can say that's been falsified.
So I'm guessing you haven't seen Anthropic's newest interpretability research, where they went in assuming that was how it worked.
But it turned out that the models can actually plan beyond the immediate next token. In rhyming verse, for example, the network has already selected the final word of the following line, and the intermediate tokens are generated with that planned target in mind.
So no, they predict beyond the next token; we only just developed measurements sensitive enough to detect it happening an order of magnitude of tokens beyond just 'next'. We'll see whether further research in that direction picks up planning on even longer horizons.
https://transformer-circuits.pub/2025/attribution-graphs/biology.html
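If you want a feel for what that kind of evidence even looks like, here's a toy sketch. To be clear, this is not Anthropic's method (they build attribution graphs on Claude, per the link above); it's just a simple logit-lens-style probe on GPT-2, and the model, prompt, and candidate rhyme word here are my own illustrative choices. The idea: read out what each intermediate layer "wants to say" at the end of the first line, and see whether a plausible rhyme word is already getting probability mass well before it could be the next token.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Prompt borrowed from the linked paper's rhyming example; "rabbit" is the
# candidate rhyme word the model might be "planning" to end the next line on.
prompt = "A rhyming couplet:\nHe saw a carrot and had to grab it,\nHis hunger was"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Logit-lens-style readout: push each layer's hidden state at the last position
# through the final layer norm and the unembedding, and check how much
# probability the candidate word gets even though it isn't the immediate next token.
candidate_id = tok.encode(" rabbit")[0]
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[:, -1, :]))
    probs = torch.softmax(logits, dim=-1)
    print(f"layer {layer:2d}: p(' rabbit') = {probs[0, candidate_id].item():.5f}")
```

Finding the rhyme word boosted in middle layers wouldn't prove planning on its own, but it's the flavor of measurement the paper is talking about: information about a token many positions away, sitting in the activations now.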
Right, other words see higher attention as it builds a sentence, leading it towards where it "wants" to go, but LLMs literally take a series of words, then spit out the next one. There's a lot more going on under the hood, as you said, but fundamentally that is the algorithm. Repeat it over and over, and you get a sentence.
If it's writing a poem about flowers and ends the first part on "As the wind blows," sure as shit "rose" is going to have significant attention within the model, even if that isn't the immediate next word, as well as words that are strongly associated with it to build the bridge.
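For what it's worth, the outer loop being described here really is that simple; the whole disagreement is about what happens inside each forward pass. A minimal greedy-decoding sketch (GPT-2 purely as a stand-in model, prompt chosen to match the example above):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tok("As the wind blows,", return_tensors="pt").input_ids

# The whole "algorithm": score every vocabulary token, keep the most likely one,
# append it, and run the model again on the longer sequence.
for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits           # (1, seq_len, vocab_size)
    next_id = logits[0, -1].argmax()         # greedy: take the single best next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tok.decode(ids[0]))
```

Swap the argmax for sampling and you get the usual generation settings. Nothing about the loop itself says whether or not the forward pass is representing tokens further out; that's exactly what the interpretability work probes.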
The attention mechanism working this way was at odds with the common wisdom among frontier researchers.
Yes, the final step of the network is producing the next token.
But the fact that intermediate steps have now been shown to be planning and targeting specific future results is a much bigger deal than you seem to be appreciating.
If I ask you to play chess and you only look one move ahead versus planning n moves ahead, you are going to play very different games, even if in both cases you are only making one move at a time.