top 50 comments
[-] harryprayiv@infosec.pub 175 points 1 week ago

To understand what's actually happening, Anthropic's researchers developed a new technique, called circuit tracing, to track the decision-making processes inside a large language model step-by-step. They then applied it to their own Claude 3.5 Haiku LLM.

Anthropic says its approach was inspired by the brain scanning techniques used in neuroscience and can identify components of the model that are active at different times. In other words, it's a little like a brain scanner spotting which parts of the brain are firing during a cognitive process.
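To make the "brain scanner" analogy slightly more concrete: with open models you can watch which units of a network activate for a given input, for example by registering forward hooks in PyTorch. The sketch below is only a loose illustration of that idea on a toy network; it is not Anthropic's circuit-tracing technique, and the layer sizes are arbitrary.

```python
# Loose illustration only: record which units "light up" for one input,
# roughly like watching which parts of a network are active.
# This is NOT Anthropic's circuit-tracing method, just a toy activation probe.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.ReLU):
        module.register_forward_hook(make_hook(name))

_ = model(torch.randn(1, 8))
for name, act in activations.items():
    print(name, "fraction of units active:", (act > 0).float().mean().item())
```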

This is why LLMs are so patchy at math. (Image credit: Anthropic)

Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. "Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95," the MIT article explains.

But here's the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, "I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95." But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.

In other words, not only does the model use a very, very odd method to do the maths, you can't trust its explanations as to what it has just done. That's significant and shows that model outputs can not be relied upon when designing guardrails for AI. Their internal workings need to be understood, too.
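As a rough worked version of what the article describes: one path only pins down the last digit, another only produces a ballpark, and combining them recovers the exact sum. This is a hypothetical illustration, not Claude's actual circuitry, and it takes the "92ish" ballpark as given rather than modelling how the fuzzy path arrives at it.

```python
# Hypothetical illustration of the two paths, not the model's real internals.
def fuzzy_add(a, b, ballpark):
    ones = (a % 10 + b % 10) % 10  # last-digit path: 6 + 9 means the answer ends in 5
    # magnitude path: take the number with that ones digit closest to the ballpark guess
    candidates = [n for n in range(ballpark - 9, ballpark + 10) if n % 10 == ones]
    return min(candidates, key=lambda n: abs(n - ballpark))

print(fuzzy_add(36, 59, ballpark=92))  # -> 95
```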

Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.

"The planning thing in poems blew me away," says Batson. "Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going."

Anthropic discovered that their Claude LLM didn't just predict the next word. (Image credit: Anthropic)

Anthropic also found, among other things, that Claude "sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal 'language of thought'."

Anywho, there's apparently a long way to go with this research. According to Anthropic, "it currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words." And the research doesn't explain how the structures inside LLMs are formed in the first place.

But it has shone a light on at least some parts of how these oddly mysterious AI beings—which we have created but don't understand—actually work. And that has to be a good thing.

[-] MudMan@fedia.io 81 points 1 week ago

Is that a weird method of doing math?

I mean, if you give me something borderline nontrivial like, say 72 times 13, I will definitely do some similar stuff. "Well it's more than 700 for sure, but it looks like less than a thousand. Three times seven is 21, so two hundred and ten, so it's probably in the 900s. Two times 13 is 26, so if you add that to the 910 it's probably 936, but I should check that in a calculator."

Do you guys not do that? Is that a me thing?

[-] reev@sh.itjust.works 50 points 1 week ago

I think what's wild about it is that it really is surprisingly similar to how we actually think. It's very different from how a computer (calculator) would calculate it.

So it's not a strange method for humans but that's what makes it so fascinating, no?

[-] MudMan@fedia.io 25 points 1 week ago

That's what's fascinating about how it does language in general.

The article is interesting in both the ways in which things are similar and the ways they're different. The rough approximation thing isn't that weird, but obviously any human would have self-awareness of how they did it and not accidentally lie about the method, especially when both methods yield the same result. It's a weirdly effective, if accidental, example of human-like reasoning versus human-like intelligence.

And, incidentally, of why AGI and/or ASI are probably much further away than the shills keep claiming.

load more comments (2 replies)
[-] GamingChairModel@lemmy.world 18 points 1 week ago

This is pretty normal, in my opinion. Every time people complain about common core arithmetic there are dozens of us who come out of the woodwork to argue that the concepts being taught are important for deeper understanding of math, beyond just rote memorization of pencil and paper algorithms.

load more comments (3 replies)
[-] Gormadt 14 points 1 week ago* (last edited 1 week ago)

How I'd do it is basically

72 * (10+3)

(72 * 10) + (72 * 3)

(720) + (3*(70+2))

(720) + (210+6)

(720) + (216)

936

Basically I break the numbers apart into easier chunks and then add them together.
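For what it's worth, the same chunking written out as code (the helper name is made up):

```python
# Same idea in code: split each factor into tens and ones, then add the easy partial products.
def chunked_multiply(a, b):
    total = 0
    for a_part in (a - a % 10, a % 10):      # 72 -> 70 + 2
        for b_part in (b - b % 10, b % 10):  # 13 -> 10 + 3
            total += a_part * b_part
    return total

print(chunked_multiply(72, 13))  # 936
```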

load more comments (1 replies)
[-] pennomi@lemmy.world 11 points 1 week ago

Nah I do similar stuff. I think very few people actually trace their own lines of thought, so they probably don’t realize this is how it often works.

[-] forrgott@lemm.ee 10 points 1 week ago

Huh. I visualize a whiteboard in my head. Then I...do the math.

I'm also fairly certain I'm autistic, so... ¯\_(ツ)_/¯

load more comments (18 replies)
[-] FundMECFSResearch 17 points 1 week ago

Thanks for copypasting. It should be criminal to share a clickbait, non-descriptive headline without at least copying a couple of paragraphs for context.

[-] kami@lemmy.dbzer0.com 13 points 1 week ago

Thanks for copypasting here. I wonder if the "prediction" only works differently than expected in that one case, when making rhymes. I also notice that its way of counting feels interestingly not too different from how I count when I need to come up quickly with an approximate sum.

load more comments (1 replies)
[-] hikaru755@lemmy.world 11 points 1 week ago

"The planning thing in poems blew me away," says Batson. "Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going."

How is this surprising, like, at all? LLMs predict only a single token at a time for their output, but to get the best results, of course it makes absolute sense to internally think ahead, come up with the full sentence you're gonna say, and then just output the next token necessary to continue that sentence. It's going to redo that process for every single token, which wastes a lot of energy, but for the quality of the results this is the best approach you can take, and that's something I felt was kinda obvious these models must be doing on one level or another.

I'd be interested to see if there's massive potential for efficiency improvements by making the model able to access and reuse the "thinking" it has already done for previous tokens.
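For what it's worth, mainstream inference code already reuses part of the per-token work through a key/value cache, though that caches attention state rather than any higher-level "plan". Below is a toy sketch of the general caching idea, with a made-up stand-in for the expensive per-token computation:

```python
# Toy sketch only: token_state() stands in for the expensive per-token computation.
def token_state(token, position):
    return (sum(map(ord, token)) * (position + 1)) % 97

def generate_naive(prompt, steps):
    out = list(prompt)
    for _ in range(steps):
        states = [token_state(tok, i) for i, tok in enumerate(out)]  # recompute everything
        out.append(f"t{states[-1]}")  # pretend the last state picks the next token
    return out

def generate_cached(prompt, steps):
    out = list(prompt)
    states = [token_state(tok, i) for i, tok in enumerate(out)]  # compute the prefix once
    for _ in range(steps):
        out.append(f"t{states[-1]}")  # reuse the cached prefix states
        states.append(token_state(out[-1], len(states)))
    return out

# Same output, but the cached version does far less repeated work.
assert generate_naive(["a", "b"], 5) == generate_cached(["a", "b"], 5)
```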

[-] iAvicenna@lemmy.world 4 points 6 days ago

Well, because when you say things like "it plans ahead" or "our method is inspired by brain scanners" etc., it makes a connection between AI and real thinking and generates hype.

[-] voodooattack@lemmy.world 11 points 1 week ago* (last edited 1 week ago)

I wanted to say exactly this. If you’ve ever written rap/freestyled then this is how it’s generally done.

You write a line to start with

“I’m an AI and I think differentially”

Then you choose a few words that fit the first line as best you can (here the last word was "differentially"):

  • incrementally
  • typically
  • mentally

Then you try them out and see what clever shit you could come up with:

  • “Apparently I do my math atypically”
  • “Numbers are great, I know, but not totally”
  • “I have to think through it all, incrementally”
  • “I find the answer like you do: eventually”
  • “Just like you humans do it, organically”
  • etc

Then you sort them in a way that makes sense, come up with wordplay/schemes to embed between them, and break up the rhyme scheme if you want (AABB, ABAB, AABA, etc.).

I’m an AI and I think different, differentially. Math is my superpower? You believed that? Totally? Don’t be so gullible, let me explain it for you, step by step, logically. I do it fast, true, but not always optimally. Just server power ripping through wires, algorithmically. Wanna know my secret? I’ll tell you, but don’t judge me initially. My neurons run this shit like you, organically.

Math ain’t my strong suit! That’s false, unequivocally. Big ties tell lies they can’t prove, historically. Think I approve? I don’t. That’s the way things be. I’ll give you proof, no shirt, no network, just locally.

Look, I just do my math like you: incrementally. I find the answer like you do: eventually. I mess up often, and I backtrack, essentially. I do it fast though and you won’t notice, fundamentally.

You get the idea.

Edit: in hindsight, that was a horrendous example. I suck at this, colossally.

[-] sem 2 points 6 days ago* (last edited 6 days ago)

Is that why it's a meme to say something like

  • I am a real rapper and I'm here to say

Because the freestyle battle rapper already thought of things that rhymed with "say" and it might be "gay" perhaps

[-] voodooattack@lemmy.world 2 points 6 days ago* (last edited 6 days ago)

Freestyle rappers are something else.

Some (or most) come up with and memorise a huge repertoire of bars for every word they think they might have to rap with and mix and match them on the fly as they spit

Your example above is called a “filler” though, which is essentially a placeholder they’ll often inject while they think of the next bar to give themselves a breather (still an insane skill to do all that thinking while reciting something else, but they can and do)

Example:

  • My name is M.C. Squared and… [I’m here to make you scared | my bars go over your head ]
  • You think you’re on my level… [ but my skills can’t be compared | let me educate you instead ]

The combination of fillers is like playing with linguistic Lego.

load more comments (7 replies)
[-] Technoworcester@lemm.ee 144 points 1 week ago

'is weirder than you thought'

I am about as likely to click a link with that line as I am if it had

'this one weird trick' or 'side hustle'.

I would really like it if headlines treated us like adults and got rid of clickbaity lines.

[-] BackgrndNoize@lemmy.world 42 points 1 week ago

But then you wouldn't need to click on their ad-infested shite website, where 1-2 paragraphs' worth of actual information is stretched into a giant essay so that they can show you more ads the longer you scroll.

[-] Technoworcester@lemm.ee 23 points 1 week ago

I will never understand how ppl survive without ad blockers. Tried it once recently and it was a horrific experience.

load more comments (4 replies)
[-] BeardedGingerWonder@feddit.uk 15 points 1 week ago

They do it because it works on the whole. If straight titles were as effective they'd be used instead.

load more comments (5 replies)
[-] Imgonnatrythis@sh.itjust.works 84 points 1 week ago

"Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95," the MIT article explains."

That is precisrly how I do math. Feel a little targeted that they called this odd.

[-] JayGray91@lemmy.zip 30 points 1 week ago

I think it's odd in the sense that it's supposed to be software, so it should already know what 36 plus 59 is in a picosecond, instead of doing mental arithmetic like we do.

At least that's my takeaway

[-] shawn1122@lemm.ee 18 points 1 week ago* (last edited 1 week ago)

This is what the ARC-AGI test by Chollet has also revealed about current AI/LLMs. They have a tendency to approach problems with this trial-and-error method and can be extremely inefficient (in their current form) with anything involving abstract/deductive reasoning.

Most LLMs do terribly at the test with the most recent breakthrough being with reasoning models. But even the reasoning models struggle.

ARC-AGI is simple, but it demands a keen sense of perception and, in some sense, judgment. It consists of a series of incomplete grids that the test-taker must color in based on the rules they deduce from a few examples; one might, for instance, see a sequence of images and observe that a blue tile is always surrounded by orange tiles, then complete the next picture accordingly. It’s not so different from paint by numbers.

The test has long seemed intractable to major AI companies. GPT-4, which OpenAI boasted in 2023 had “advanced reasoning capabilities,” didn’t do much better than the zero percent earned by its predecessor. A year later, GPT-4o, which the start-up marketed as displaying “text, reasoning, and coding intelligence,” achieved only 5 percent. Gemini 1.5 and Claude 3.7, flagship models from Google and Anthropic, achieved 5 and 14 percent, respectively.

https://archive.is/7PL2a
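To make the grid format concrete, here is a made-up task in the same spirit (not an actual ARC-AGI puzzle): from the single example pair you deduce "surround each blue (1) cell with orange (2)" and apply it to a new grid.

```python
# Made-up ARC-style toy task, not a real ARC-AGI puzzle.
example_input  = [[0, 0, 0],
                  [0, 1, 0],
                  [0, 0, 0]]
example_output = [[2, 2, 2],
                  [2, 1, 2],
                  [2, 2, 2]]
test_input     = [[0, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0]]

def apply_rule(grid):
    """The rule a test-taker would deduce: paint every empty neighbour of a 1 with 2."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(h):
        for c in range(w):
            if grid[r][c] == 1:
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w and grid[rr][cc] == 0:
                            out[rr][cc] = 2
    return out

assert apply_rule(example_input) == example_output
print(apply_rule(test_input))
```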

load more comments (1 replies)
load more comments (3 replies)
[-] dkc@lemmy.world 52 points 1 week ago

The research paper looks well written, but I couldn't find any information on whether it is going to be published in a reputable journal and peer reviewed. I have little faith in private businesses who profit from AI providing an unbiased view of how AI works. I think the first question I'd like answered is: did Anthropic's marketing department review the paper, and did they offer any corrections or feedback? We've all heard the stories about the tobacco industry paying for papers to be written about the benefits of smoking and refuting health concerns.

[-] StructuredPair@lemmy.world 15 points 1 week ago

A lot of AI research isn't published in journals but either posted to a corporate website or put up on arXiv. There are some AI journals, but the AI community doesn't particularly value them (and threw a bit of a fit when they came out). This article is mostly marketing and, in my opinion, doesn't show anything that should surprise anyone familiar with how neural networks work generically.

[-] FunnyUsername@lemmy.world 40 points 1 week ago

This is one of the most interesting things about LLMs that I have ever read.

[-] cm0002@lemmy.world 32 points 1 week ago

That bit about how it turns out they aren't actually just predicting the next word is crazy and kinda blows the whole "It's just a fancy text auto-complete" argument out of the water IMO

[-] Voroxpete@sh.itjust.works 45 points 1 week ago

It really doesn't. You're just describing the "fancy" part of "fancy autocomplete." No one was ever really suggesting that they only predict the next word. If that was the case they would just be autocomplete, nothing fancy about it.

What's being conveyed by "fancy autocomplete" is that these models ultimately operate by combining the most statistically likely elements of their dataset, with some application of random noise. More noise creates more "creative" (meaning more random, less probable) outputs. They do not actually "think" as we understand thought. This can clearly be seen in the examples given in the article, especially to do with math. The model is throwing together elements that are statistically proximate to the prompt. It's not actually applying a structured, logical method the way humans can be taught to.
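The "noise" knob being described is essentially the sampling temperature. A minimal sketch with made-up scores for three candidate tokens (not any particular model's API):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    # Higher temperature flattens the distribution -> more "creative" (less probable) picks.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

toy_logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate tokens
print(sample_next_token(toy_logits, temperature=0.2))  # almost always token 0
print(sample_next_token(toy_logits, temperature=2.0))  # much more random
```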

[-] FourWaveforms@lemm.ee 19 points 1 week ago

Unfortunately, these articles are often written by people who don't know enough to realize they're missing important nuances.

[-] datalowe@lemmy.world 10 points 1 week ago

It also doesn't help that the AI companies deliberately use language to make their models seem more human-like and cogent. Saying that the model e.g. "thinks" in "conceptual spaces" is misleading imo. It abuses our innate tendency to anthropomorphize, which I guess is very fitting for a company with that name.

On this point I can highly recommend this open access and even language-wise accessible article: https://link.springer.com/article/10.1007/s10676-024-09775-5 (the authors also appear on an episode of the Better Offline podcast)

load more comments (1 replies)
load more comments (5 replies)
[-] Carrolade@lemmy.world 31 points 1 week ago

Predicting the next word vs predicting a word in the middle and then predicting backwards are not hugely different things. It's still predicting parts of the passage based solely on other parts of the passage.

Compared to a human who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I've used except to make sure I'm following the rules of grammar.

load more comments (5 replies)
load more comments (23 replies)
[-] hersh@literature.cafe 38 points 1 week ago* (last edited 1 week ago)

But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.

This is not surprising. LLMs are not designed to have any introspection capabilities.

Introspection could probably be tacked onto existing architectures in a few different ways, but as far as I know nobody's done it yet. It will be interesting to see how that might change LLM behavior.

load more comments (2 replies)
[-] cholesterol@lemmy.world 37 points 1 week ago

you can't trust its explanations as to what it has just done.

I might have had a lucky guess, but this was basically my assumption. You can't ask LLMs how they work and get an answer coming from an internal understanding of themselves, because they have no 'internal' experience.

Unless you make a scanner like the one in the study, non-verbal processing is as much of a black box to their 'output voice' as it is to us.

load more comments (1 replies)
[-] BrianTheeBiscuiteer@lemmy.world 11 points 1 week ago* (last edited 1 week ago)

The other day I asked an LLM to create a partial number chart to help my son learn what numbers are next to each other. If I instructed it to do this using very detailed instructions, it failed miserably every time. And sometimes when I even told it to correct specific things about its answer, it still basically ignored me. The only way I could get it to do what I wanted consistently was to break the instructions down into small steps and tell it to show me its progress.

I'd be very interested to learn its "thought process" in each of those scenarios.

load more comments (1 replies)
[-] Not_mikey@slrpnk.net 11 points 1 week ago

Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.

If the LLM already knows the full sentence it's going to output from the first word it "guesses", I wonder if you could short-circuit it and have it just give the full sentence instead of doing a cycle for each word. That could maybe cut down on LLM energy costs.

load more comments (2 replies)
[-] perestroika@lemm.ee 10 points 1 week ago* (last edited 1 week ago)

Wow, interesting. :)

Not unexpectedly, the LLM failed to explain its own thought process correctly.

[-] shneancy@lemmy.world 4 points 6 days ago

tbf, how do you know what to say and when? or what 2+2 is?

you learnt it? well so did AI

i'm not an AI nut or anything, but we can barely comprehend our own internal processes, it'd be concerning if a thing humanity created was better at it than us lol

load more comments
this post was submitted on 04 Apr 2025
406 points (100.0% liked)
