"Notably, O3-MINI, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as "trivial", even when their validity was crucial."

pennomi@lemmy.world 2 points 1 week ago

LLMs are a lot more sophisticated than we initially thought; read the study yourself.

Essentially, they do not simply predict the next token. When scientists trace the activated neurons, they find that these models plan ahead throughout inference, and then lie about those plans when asked to say how they came to a conclusion.

[-] bitofhope@awful.systems 22 points 1 week ago

Essentially they do not simply predict the next token

looks inside

it's predicting the next token
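
For anyone who wants to see what "predicting the next token" means mechanically, here is a minimal greedy-decoding sketch in Python. The model (GPT-2 via Hugging Face transformers) and the prompt are illustrative choices, not anything from the study.

```python
# Minimal sketch of autoregressive "next token prediction": at each step the
# model scores every token in the vocabulary and we append the top-scoring one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("He saw a carrot and had to", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits   # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()   # most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```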

pennomi@lemmy.world 1 point 1 week ago

Read the paper; it’s not simply predicting the next token. For instance, when writing a rhyming couplet, it first plans what the rhyme word will be, and then fills in the rest of the sentence.

The researchers were surprised by this too; they expected it to be the other way around.
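
To make the planning claim concrete, here is a toy "logit lens"-style probe: it checks whether a candidate rhyme token is already promoted in intermediate layers at the end of the first line, before any of the second line exists. This is far cruder than the attribution-graph tracing in the actual paper, and the couplet and the candidate word " rabbit" are illustrative assumptions, not reproduced results.

```python
# Toy probe loosely inspired by the planning result: project each layer's
# residual stream (at the last prompt position) through the unembedding and
# see where a candidate rhyme token ranks. NOT the paper's method; a sketch.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "He saw a carrot and had to grab it,\n"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
rhyme_id = tokenizer(" rabbit").input_ids[0]  # first token of the candidate rhyme

with torch.no_grad():
    out = model(input_ids, output_hidden_states=True)

for layer, h in enumerate(out.hidden_states):
    # GPT-2 specific: apply the final layer norm, then the unembedding matrix.
    logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
    rank = int((logits > logits[rhyme_id]).sum())
    print(f"layer {layer:2d}: ' rabbit' outranked by {rank} of {logits.numel()} tokens")
```

If a planned ending is represented early, the candidate token should climb the ranking well before the final layer; whether a base GPT-2 shows the effect as clearly as the Claude model studied in the paper is an open question.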

bitofhope@awful.systems 18 points 1 week ago

Oh, sorry, I got so absorbed in reading the riveting material about features predicting state name tokens in order to predict state capital tokens that I missed we were quibbling over the word "next". Alright, they can predict tokens out of order, too. Very impressive, I guess.
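
For the record, the state-capital bit being mocked is a two-hop claim (Dallas → Texas → Austin): the model is said to activate an intermediate "Texas" representation on the way to answering "Austin". The same toy logit-lens probe as above can look for that; again a rough sketch rather than the paper's method, with the prompt and token choices being my assumptions.

```python
# Toy two-hop probe: does the intermediate concept (" Texas") get promoted in
# middle layers while the final answer (" Austin") dominates at the output?
# A logit-lens sketch, not the feature-attribution method the paper uses.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Fact: the capital of the state containing Dallas is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
texas = tokenizer(" Texas").input_ids[0]    # intermediate-hop token
austin = tokenizer(" Austin").input_ids[0]  # final-answer token

with torch.no_grad():
    out = model(input_ids, output_hidden_states=True)

for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
    print(f"layer {layer:2d}: Texas {logits[texas].item():.2f}, "
          f"Austin {logits[austin].item():.2f}")
```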
