848
submitted 3 weeks ago* (last edited 3 weeks ago) by Allah@lemm.ee to c/technology@lemmy.world

LOOK MAA I AM ON FRONT PAGE

[-] Nanook@lemm.ee 234 points 3 weeks ago

lol is this news? I mean we call it AI, but it's just LLMs and variants; it doesn't think.

[-] MNByChoice@midwest.social 79 points 3 weeks ago

The "Apple" part. CEOs only care what companies say.

[-] kadup@lemmy.world 56 points 3 weeks ago

Apple is significantly behind and arrived late to the whole AI hype, so of course it's in their absolute best interest to keep showing how LLMs aren't special or amazingly revolutionary.

They're not wrong, but the motivation is also pretty clear.

[-] homesweethomeMrL@lemmy.world 31 points 3 weeks ago

“Late to the hype” is actually a good thing. Gen AI is a scam wrapped in idiocy wrapped in a joke. That Apple is slow to ape the idiocy of Microsoft is just fine.

[-] JohnEdwa@sopuli.xyz 26 points 3 weeks ago* (last edited 3 weeks ago)

"It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." —Pamela McCorduck.
It's called the AI Effect.

As Larry Tesler puts it, "AI is whatever hasn't been done yet."

[-] kadup@lemmy.world 18 points 3 weeks ago

That entire paragraph is much better at supporting the precise opposite argument. Computers can beat Kasparov at chess, but they're clearly not thinking when making a move - even if we use the most open biological definitions for thinking.

[-] Grimy@lemmy.world 18 points 3 weeks ago

No, it shows how certain people misunderstand the meaning of the word.

You've called NPCs in video games "AI" for a decade, yet you were never implying they were somehow intelligent. The whole argument is strangely inconsistent.

[-] technocrit@lemmy.dbzer0.com 16 points 3 weeks ago* (last edited 3 weeks ago)

I'm going to write a program to play tic-tac-toe. If y'all don't think it's "AI", then you're just haters. Nothing will ever be good enough for y'all. You want scientific evidence of intelligence?!?! I can't even define intelligence so take that! /s

Seriously tho. This person is arguing that a checkers program is "AI". It kinda demonstrates the loooong history of this grift.

[-] JohnEdwa@sopuli.xyz 15 points 3 weeks ago* (last edited 3 weeks ago)

It is. And it always has been. "Artificial intelligence" doesn't mean a feeling, thinking robot person (that would fall under AGI or artificial consciousness); it's a vast field of research in computer science with many, many things under it.

[-] Clent@lemmy.dbzer0.com 17 points 3 weeks ago

Proving it matters. Science is constantly proving things that people believe are obvious, because people have an uncanny ability to believe things that are false. Some people will believe things long after science has proven them false.

[-] minoscopede@lemmy.world 68 points 3 weeks ago* (last edited 3 weeks ago)

I see a lot of misunderstandings in the comments 🫤

This is a pretty important finding for researchers, and it's not obvious by any means. This finding is not showing a problem with LLMs' abilities in general. The issue they discovered is specifically for so-called "reasoning models" that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that's a flaw that needs to be corrected before models can actually reason.

[-] REDACTED@infosec.pub 14 points 3 weeks ago* (last edited 3 weeks ago)

What confuses me is that we seemingly keep moving the goalposts on what counts as reasoning. Not too long ago, some smart algorithms or a bunch of instructions for software (if/then) was officially, by definition, software/computer reasoning. Logically, CPUs do it all the time. Suddenly, when AI is doing that with pattern recognition, memory and even more advanced algorithms, it's no longer reasoning? I feel like at this point a more relevant question is "What exactly is reasoning?". Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.

https://en.wikipedia.org/wiki/Reasoning_system
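The classic if/then style of machine reasoning the comment refers to can be sketched as a tiny forward-chaining rule engine (a hypothetical example; the rules and facts here are made up for illustration):

```python
# Minimal forward-chaining "reasoning system": fire if/then rules until
# no new facts can be derived. Rules map a set of conditions to a conclusion.
rules = [
    ({"has_fur", "says_meow"}, "is_cat"),
    ({"is_cat"}, "is_mammal"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules whose conditions all hold, collecting new facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fur", "says_meow"}, rules))
# → {'has_fur', 'says_meow', 'is_cat', 'is_mammal'}
```

This is exactly the kind of system the linked Wikipedia article covers: purely deterministic if/then chaining, which was long called "reasoning" without anyone claiming intelligence.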

[-] mavu@discuss.tchncs.de 65 points 3 weeks ago

No way!

Statistical Language models don't reason?

But OpenAI, robots taking over!

[-] Jhex@lemmy.world 58 points 3 weeks ago

this is so Apple, claiming to invent or discover something "first" 3 years later than the rest of the market

[-] sev@nullterra.org 54 points 3 weeks ago

Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment-to-moment, means that it would be a physical impossibility for any LLM to achieve any real "reasoning" processes.
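For reference, the Markov-chain analogy in its simplest form — a toy next-token model trained on a made-up corpus (a sketch of the analogy, not how production LLMs are actually implemented):

```python
import random
from collections import defaultdict

def train(tokens, order=1):
    """Map each context of `order` tokens to the tokens observed after it."""
    model = defaultdict(list)
    for i in range(len(tokens) - order):
        context = tuple(tokens[i:i + order])
        model[context].append(tokens[i + order])
    return model

def generate(model, start, length=10, seed=0):
    """Sample next tokens from the observed continuations of the current context."""
    rng = random.Random(seed)
    out = list(start)
    while len(out) < len(start) + length:
        continuations = model.get(tuple(out[-len(start):]))
        if not continuations:
            break
        out.append(rng.choice(continuations))
    return out

corpus = "the cat sat on the mat the cat ran".split()
model = train(corpus)
print(" ".join(generate(model, ("the",), length=5)))
```

Like the comment says: it can only ever continue from a prompt, and the "working set" is frozen at training time.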

[-] kescusay@lemmy.world 17 points 3 weeks ago

I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy "dataset" that a proper neural network incorporates and reasons with, and the LLM could be kept real-time updated (sort of) with MCP servers that incorporate anything new it learns.

But I don't think we're anywhere near there yet.

[-] billwashere@lemmy.world 53 points 3 weeks ago

When are people going to realize that, in its current state, an LLM is not intelligent? It doesn't reason. It does not have intuition. It's a word predictor.

[-] brsrklf@jlai.lu 48 points 3 weeks ago

You know, despite not really believing LLM "intelligence" works anywhere like real intelligence, I kind of thought maybe being good at recognizing patterns was a way to emulate it to a point...

But that study seems to prove they're still not even good at that. At first I was wondering how hard the puzzles must have been, and then there's a bit about LLMs finishing 100-move Towers of Hanoi (on which they were trained) and failing 4-move river crossings. Logically, those problems are very similar... Also, failing to apply a step-by-step solution they were given.

[-] auraithx@lemmy.dbzer0.com 41 points 3 weeks ago

This paper doesn’t prove that LLMs aren’t good at pattern recognition, it demonstrates the limits of what pattern recognition alone can achieve, especially for compositional, symbolic reasoning.

[-] technocrit@lemmy.dbzer0.com 17 points 3 weeks ago* (last edited 3 weeks ago)

Computers are awesome at "recognizing patterns" as long as the pattern is a statistical average of some possibly worthless data set. And it really helps if the computer is set up ahead of time to recognize pre-determined patterns.

[-] Mniot@programming.dev 42 points 3 weeks ago

I don't think the article summarizes the research paper well. The researchers gave the AI models simple-but-large (which they confusingly called "complex") puzzles. Like Towers of Hanoi but with 25 discs.

The solution to these puzzles is nothing but patterns. You can write code that will solve the Tower puzzle for any size n and the whole program is less than a screen.

The problem the researchers see is that on these long, pattern-based solutions, the models follow a bad path and then just give up long before they hit their limit on tokens. The researchers don't have an answer for why this is, but they suspect that the reasoning doesn't scale.
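The "less than a screen" claim is easy to verify — a minimal recursive Tower of Hanoi solver (a sketch; peg names and structure are my own, not the paper's code):

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Return the full move list for moving n discs from src to dst."""
    if n == 0:
        return []
    # Move n-1 discs out of the way, move the largest disc, then restack.
    return (hanoi(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi(n - 1, aux, src, dst))

moves = hanoi(10)
print(len(moves))  # 1023 moves, i.e. 2**10 - 1; 25 discs needs 2**25 - 1
```

The solution is pure mechanical pattern-following, which is what makes the models giving up partway through so notable.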

[-] bjoern_tantau@swg-empire.de 39 points 3 weeks ago* (last edited 3 weeks ago)
[-] GaMEChld@lemmy.world 39 points 3 weeks ago

Most humans don't reason. They just parrot shit too. The design is very human.

[-] reksas@sopuli.xyz 35 points 3 weeks ago

does ANY model reason at all?

[-] 4am@lemm.ee 32 points 3 weeks ago

No, and to make that work using the current structures we use for creating AI models we’d probably need all the collective computing power on earth at once.

[-] technocrit@lemmy.dbzer0.com 34 points 3 weeks ago* (last edited 3 weeks ago)

Why would they "prove" something that's completely obvious?

The burden of proof is on the grifters who have overwhelmingly been making false claims and distorting language for decades.

[-] TheRealKuni@midwest.social 31 points 3 weeks ago

Why would they "prove" something that's completely obvious?

I don’t want to be critical, but I think if you step back a bit and look and what you’re saying, you’re asking why we would bother to experiment and prove what we think we know.

That’s a perfectly normal and reasonable scientific pursuit. Yes, in a rational society the burden of proof would be on the grifters, but that’s never how it actually works. It’s always the doctors disproving the cure-all, not the snake oil salesmen failing to prove their own product.

There is value in this research, even if it fits what you already believe on the subject. I would think you would be thrilled to have your hypothesis confirmed.

[-] yeahiknow3@lemmings.world 23 points 3 weeks ago* (last edited 3 weeks ago)

They’re just using the terminology that’s widespread in the field. In a sense, the paper’s purpose is to prove that this terminology is unsuitable.

[-] tauonite@lemmy.world 14 points 3 weeks ago

That's called science

[-] FreakinSteve@lemmy.world 34 points 3 weeks ago

NOOOOOOOOO

SHIIIIIIIIIITT

SHEEERRRLOOOOOOCK

[-] technocrit@lemmy.dbzer0.com 29 points 3 weeks ago* (last edited 3 weeks ago)

Peak pseudo-science. The burden of evidence is on the grifters who claim "reason". But neither side has any objective definition of what "reason" means. It's pseudo-science against pseudo-science in a fierce battle.

[-] surph_ninja@lemmy.world 27 points 3 weeks ago

You assume humans do the opposite? We literally institutionalize humans who do not follow set patterns.

[-] petrol_sniff_king 25 points 3 weeks ago

Maybe you failed all your high school classes, but that ain't got none to do with me.

[-] skisnow@lemmy.ca 25 points 3 weeks ago

What's hilarious/sad is the response to this article over on reddit's "singularity" sub, in which all the top comments are people who've obviously never got all the way through a research paper in their lives all trashing Apple and claiming their researchers don't understand AI or "reasoning". It's a weird cult.

[-] RampantParanoia2365@lemmy.world 24 points 3 weeks ago* (last edited 3 weeks ago)

Fucking obviously. Until Data's positronic brain becomes reality, AI is not actual intelligence.

AI is not A I. I should make that a tshirt.

[-] JDPoZ@lemmy.world 14 points 3 weeks ago

It’s an expensive carbon spewing parrot.

[-] mfed1122@discuss.tchncs.de 22 points 3 weeks ago* (last edited 3 weeks ago)

This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me all these studies that prove they're "just" memorizing patterns don't prove anything other than that, unless coupled with research on the human brain to prove we do something different.

[-] LesserAbe@lemmy.world 13 points 3 weeks ago

Agreed. We don't seem to have a very cohesive idea of what human consciousness is or how it works.

[-] sp3ctr4l@lemmy.dbzer0.com 20 points 3 weeks ago* (last edited 3 weeks ago)

This has been known for years, this is the default assumption of how these models work.

You would have to prove that some kind of actual reasoning capacity has arisen as... some kind of emergent complexity phenomenon.... not the other way around.

Corpos have just marketed/gaslit us/themselves so hard that they apparently forgot this.

[-] LonstedBrowryBased@lemm.ee 20 points 3 weeks ago

Yah of course they do they’re computers

[-] flandish@lemmy.world 19 points 3 weeks ago

stochastic parrots. all of them. just upgraded “soundex” models.

this should be no surprise, of course!

[-] ZILtoid1991@lemmy.world 17 points 3 weeks ago

Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.

[-] atlien51@lemm.ee 15 points 3 weeks ago

Employers who are foaming at the mouth at the thought of replacing their workers with cheap AI:

🫢

this post was submitted on 08 Jun 2025