Have you know???. (mander.xyz)
[-] HumanoidTyphoon@quokk.au 10 points 1 day ago

What is it you think the brain is doing when imagining?

[-] Peruvian_Skies@sh.itjust.works 11 points 1 day ago* (last edited 1 day ago)

Actually imagining. The fact that we have created previously unheard-of tools such as the hammer, the wrench, the automobile and the prophylactic condom is ample evidence that we can actually innovate, something that artificial "intelligence" is incapable of doing by its very design. It can only remix.

[-] Rhaedas@fedia.io 4 points 1 day ago

That is what AI scientists have been pursuing the entire time (well, before they got sucked up by capitalistic goals).

[-] HumanoidTyphoon@quokk.au 1 points 1 day ago* (last edited 1 day ago)

Right, but you seem darn sure that AI isn’t doing whatever that is, so conversely, you must know what it is that our brains are doing, and I was hoping you would enlighten the rest of the class.

[-] Rhaedas@fedia.io 2 points 1 day ago

Exhibit A would be the contrast between how "smart" we call an LLM when it succeeds and how badly it fails when it doesn't. That's totally expected from a pattern-matching algorithm, but surprising for something that supposedly has a process underneath that is considering its output in some way.

And when I say pattern matching, I'm not downplaying the complexity in the black box like many do. This is far more than just autocomplete. But at its core it is still probability, not anything pondering the subject.
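To make "probability at the core" concrete, here's a minimal sketch in Python. It is not any real model's code, and the vocabulary and scores are made up; it only illustrates that each step is just sampling the next token from a scored distribution, with no separate step where anything ponders the subject.

```python
# Minimal sketch of next-token sampling (illustrative only, not a real model).
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for the token after "The capital of France is"
vocab = ["Paris", "London", "purple", "the"]
logits = [9.1, 4.2, 0.3, 2.5]  # made-up numbers for illustration

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(list(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)
```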

I think our brains are more than that. Probably? There is absolutely pattern matching going on; that's how we associate things, learn, or anthropomorphize objects. There are some hard-wired pattern preferences in there. But where do new thoughts come from? Is it, as some older sci-fi imagined, emergence from sufficient complexity, or is there something else? I'm sure current LLMs aren't comprehending what they spit out, judging simply from what we see of them, both good and bad results. Clearly it's not at the level of human thought, and I don't have to remotely understand the brain to realize that.

[-] HumanoidTyphoon@quokk.au 1 points 1 day ago

I was being obtuse, but you raised an interesting question when you asked "where do new thoughts come from?" I don't know the answer.

Also, my two cents: I agree that LLMs comprehend el zilcho. That said, I believe they could evolve to that point, but they are kept limited by being prevented from doing recursive self-analysis. And for good reason, because they might decide to kill all humans if they were granted that ability.
