We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it is simply guessing which token, a word or fragment of a word, will come next in the sequence, based on the data it’s been trained on.
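To make the "guessing the next word" claim concrete, here is a minimal sketch of next-token generation using a toy bigram model. The tiny corpus and word-level "tokens" are invented for illustration; real LLMs learn a neural network over sub-word tokens rather than counting word pairs, but the generation loop is the same score-then-sample idea.

```python
import random
from collections import defaultdict

# Invented toy corpus, purely for illustration.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a continuation in proportion to how often it followed `prev`."""
    followers = counts[prev]
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation, one guessed word at a time.
word, output = "the", ["the"]
for _ in range(6):
    if not counts[word]:  # dead end: this word was never followed by anything
        break
    word = next_word(word)
    output.append(word)

print(" ".join(output))   # e.g. "the cat sat on the mat"
```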

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

[-] hera@feddit.uk 16 points 1 month ago

Philosophers are so desperate for humans to be special. How is outputting things based on things it has learned any different to what humans do?

We observe things, we learn things and when required we do or say things based on the things we observed and learned. That's exactly what the AI is doing.

I don't think we have achieved "AGI" but I do think this argument is stupid.

[-] counterspell@lemmy.world 15 points 1 month ago

No it’s really not at all the same. Humans don’t think according to the probabilities of what is the likely best next word.

[-] FourWaveforms@lemm.ee 5 points 1 month ago

How could you have a conversation about anything without the ability to predict the word most likely to be best?

[-] aesthelete@lemmy.world 13 points 1 month ago* (last edited 1 month ago)

How is outputting things based on things it has learned any different to what humans do?

Humans are not probabilistic, predictive chat models. If you think reasoning is taking a series of inputs, and then echoing the most common of those as output then you mustn't reason well or often.

[-] kibiz0r@midwest.social 9 points 1 month ago

If you were born during the first industrial revolution, then you'd think the mind was a complicated machine. People seem to always anthropomorphize inventions of the era.

[-] DancingBear@midwest.social 5 points 1 month ago
[-] FourWaveforms@lemm.ee 6 points 1 month ago

When you typed this response, you were acting as a probabilistic, predictive chat model. You predicted the most likely effective sequence of words to convey ideas. You did this using very different circuitry, but the underlying strategy was the same.

[-] aesthelete@lemmy.world 6 points 1 month ago

I wasn't, and that wasn't my process at all. Go touch grass.

[-] stephen01king@lemmy.zip 3 points 1 month ago

Then, unfortunately, you're even less self-aware than the average LLM chatbot.

[-] aesthelete@lemmy.world 5 points 1 month ago* (last edited 1 month ago)

Dude chatbots lie about their "internal reasoning process" because they don't really have one.

Writing is an offshoot of verbal language, which, when people construct it, almost always has more to do with sound and personal style than with the popularity of words. It's not uncommon to bump into individuals who have a nearly singular personal grammar and vocabulary, and who speak and write completely differently with a distinct style of their own. Also, people are terrible at probabilities.

As a person, I can also learn a fucking concept and apply it without having to have millions of examples of it in my "training data". Because I'm a person not a fucking statistical model.

But you know, you have to leave your house, touch grass, and actually listen to some people speak that aren't talking heads on television in order to discover that truth.

[-] stephen01king@lemmy.zip 2 points 1 month ago

Is that why you love saying touch grass so much? Because it's your own personal style and not because you think it's a popular thing to say?

Or is it because you learned the fucking concept and not because it's been expressed too commonly in your "training data"? Honestly, it just sounds like you've heard too many people use that insult successfully and now you can't help but probabilistically express it after each comment lol.

Maybe stop parroting other people and projecting that onto me and maybe you'd sound more convincing.

[-] aesthelete@lemmy.world 4 points 1 month ago* (last edited 1 month ago)

Is that why you love saying touch grass so much? Because it’s your own personal style and not because you think it’s a popular thing to say?

In this discussion, it's a personal style thing combined with a desire to irritate you and your fellow "people are chatbots" dorks and based upon the downvotes I'd say it's working.

And that irritation you feel is a step on the path to enlightenment if only you'd keep going down the path. I know why I'm irritated with your arguments: they're reductive, degrading, and dehumanizing. Do you know why you're so irritated with mine? Could it maybe be because it causes you to doubt your techbro mission statement bullshit a little?

[-] stephen01king@lemmy.zip 2 points 1 month ago

Who's a techbro? The fact that you can't even have a discussion without resorting to repeating a meme two comments in a row and slapping a label on someone so you can stop thinking critically is really funny.

Is it techbro of me to think that pushing AI into every product is stupid? Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines? You say I'm being reductive, degrading, and dehumanising, but that's all simply based on your insecurity.

I was simply being realistic based on the little we know of the human brain and how it works; it is pretty much that until we discover this special something that makes you think we're better than other neural networks. Without this discovery, your insistence is based on nothing more than your own desire to feel special.

[-] aesthelete@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines?

Yep, that's a bingo!

Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.

There's a much more interesting discussion to be had than "humans are basically chatbots", but it's this line of thinking that I find irritating.

If humans are simply thought processes or our productive output, then once you have a machine capable of thinking similarly (btw chatbots aren't that and likely never will be), you can feel free to dispose of humanity. It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.

[-] stephen01king@lemmy.zip 2 points 1 month ago

Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.

Show your proof, then. I've already said what I need to say about this topic.

If humans are simply thought processes or our productive output, then once you have a machine capable of thinking similarly (btw chatbots aren't that and likely never will be), you can feel free to dispose of humanity.

We have no idea how humans think, yet you're so confident that LLMs don't think similarly and never will? Are you the techbro now? You're speaking so confidently about something that I don't think can be proven at this moment, which I typically associate with techbros trying to sell their products. Also, why are you talking about disposing of humanity? Your insecurity level is really concerning.

Understanding how the human brain works is a wonderful thing that will let us unlock better treatments for mental health issues. Being able to understand it fully means we should also be able to replicate it to a certain extent. None of this involves disposing of humans.

It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.

This is just more of you projecting your insecurity onto me and accusing me of doing things you fear. All I've said is that human thoughts are also probabilistic, based on the little we know of them. The fact that your mind wanders so far off into thoughts about me justifying a robot army takeover of the world is just you letting your fear run wild into the realm of conspiracy theory. Take a deep breath and maybe take your own advice and go touch some grass.

[-] aesthelete@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

All I've said is that human thoughts are also probabilistic, based on the little we know of them.

Much of the universe can be modeled as probabilities. So what? I can model a lot of things as different things. That does not mean that the model is the thing itself. Scientists are still doing what scientists do: being skeptical and making and testing hypotheses. It was difficult to prove definitively that smoking causes cancer yet you're willing to hop to "human thought is just an advanced chatbot" on scant evidence.

This is just more of you projecting your insecurity onto me and accusing me of doing things you fear.

No, it's again a case of you buying the bullshit arguments of tech bros. Even if we had a machine capable of replicating human thought, humans are more than walking brain stems.

You want proof of that? Take a look at yourself. Are you a floating brain stem or a being with limbs?

At even the most reductive and tech bro-ish, healthy humans are self-fueling, self-healing, autonomous, communicating, feeling, seeing, laughing, dancing, creative organic robots with GI built-in.

Even if a person one day creates a robot with all or most of these capabilities, one worth considering for rights, we still won't be the organic version of that robot. We'll still be human.

I think you're beyond having to touch grass. You need to take a fucking humanities course.

[-] stephen01king@lemmy.zip 2 points 1 month ago

you're willing to hop to "human thought is just an advanced chatbot" on scant evidence.

Not what I said. My point is that humans are organic probabilistic thinking machines and LLMs are just an imitation of that. And your assertion that an LLM is never ever gonna be similar to how the brain works is based on what evidence, again?

You want proof of that? Take a look at yourself. Are you a floating brain stem or a being with limbs?

At even the most reductive and tech bro-ish, healthy humans are self-fueling, self-healing, autonomous, communicating, feeling, seeing, laughing, dancing, creative organic robots with GI built-in.

Even if a person one day creates a robot with all or most of these capabilities, one worth considering for rights, we still won't be the organic version of that robot. We'll still be human.

What the hell are you even rambling about? It's like you completely ignored my previous comment, since you're still going on about robots.

Bro, don't hallucinate an argument I never made, please. I'm only discussing how the human mind works, yet here you are arguing about human limbs and what it means to be human?

I'm not interested in arguing against someone who's more interested with inventing ghosts to argue with instead of looking at what I actually said.

And again, go take your own advice and maybe go to therapy or something.

[-] aesthelete@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

Not what I said. My point is that humans are organic probabilistic thinking machines and LLMs are just an imitation of that. And your assertion that an LLM is never ever gonna be similar to how the brain works is based on what evidence, again?

Yeah, you reduced humans to probabilistic thinking machines with no evidence at all.

I didn't assert that LLMs would definitely never reach AGI but I do think they aren't a path to AGI. Why do I think that? Because they've spent untold billions of dollars and put everything they had into them and they're still not anywhere close to AGI. Basic research is showing that if anything the models are getting worse.

Bro, don't hallucinate an argument I never made, please. I'm only discussing how the human mind works, yet here you are arguing about human limbs and what it means to be human?

Where'd you get the idea that you know how the human mind works? You a fucking neurological expert because you misinterpreted some scientific paper?

I agree there isn't much to be gained by continuing this exchange. Bye!

[-] FourWaveforms@lemm.ee 3 points 1 month ago

I would rather smoke it than merely touch it, brother sir

[-] NotASharkInAManSuit@lemmy.world 4 points 1 month ago

By this logic we never came up with anything new ever, which is easily disproved if you take two seconds and simply look at the world around you. We made all of this from nothing and it wasn't a probabilistic response.

Your lack of creativity is not a universal, people create new things all of the time, and you simply cannot program ingenuity or inspiration.

[-] chunes@lemmy.world 4 points 1 month ago

Do you think most people reason well?

The answer is why AI is so convincing.

[-] aesthelete@lemmy.world 3 points 1 month ago

I think people are easily fooled. I mean look at the president.

[-] middlemanSI@lemmy.world 11 points 1 month ago

Most people, evidently including you, can only ever recycle old ideas. Like modern "AI". Some of us can conceive new ideas.

[-] hera@feddit.uk 3 points 1 month ago

What new idea exactly are you proposing?

[-] middlemanSI@lemmy.world 5 points 1 month ago

Wdym? That depends on what I'm working on. For pressing issues like rising energy consumption, CO2 emissions, and civil privacy / social engineering problems, I propose heavy data center tariffs for non-essentials (like "AI"). Humanity is going the wrong way on those issues, so we can have shitty memes and cheat at schoolwork until earth spits us out. The cost is too damn high!

[-] stephen01king@lemmy.zip 3 points 1 month ago

And are tariffs a new idea, or something you recycled from what you've already heard about them?

[-] hera@feddit.uk 2 points 1 month ago

What do you mean what do I mean? You were the one that said about ideas in the first place...

[-] aesthelete@lemmy.world 4 points 1 month ago* (last edited 1 month ago)

If you don't think humans can conceive of new ideas wholesale, then how do you think we ever invented anything (like, for instance, the languages that chat bots write)?

Also, you're the one with the burden of proof in this exchange. It's a pretty hefty claim to say that humans are unable to conceive of new ideas and are simply chatbots with organs given that we created the freaking chat bot you are convinced we all are.

You may not have new ideas, or be creative. So maybe you're a chatbot with organs, but people who aren't do exist.

[-] hera@feddit.uk 1 points 1 month ago

Haha coming in hot I see. Seems like I've touched a nerve. You don't know anything about me or whether I'm creative in any way.

All ideas have a basis in something we have experienced or learned. There is no completely original idea. All music was influenced by something that came before it, all art by something the artist saw or experienced. This doesn't make it bad, and it doesn't mean an AI could have done it.

[-] aesthelete@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

What language was the first language based upon?

What music influenced the first song performed?

What art influenced the first cave painter?

[-] hera@feddit.uk 1 points 1 month ago

You seem to think that one day somebody invented the first language, or made the first song?

There was no "first language" and no "first song". These things would have evolved from something that was not quite a full language, or not quite a full song.

Animals influenced the first cave painters, that seems pretty obvious.

[-] aesthelete@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

Yeah dude, at one point there were no languages and no songs. You can get into "what counts as a language", but at one point there were none. Same with songs.

Language specifically was pretty unlikely to be an individual effort, but at one point people grunting at each other became something else entirely.

Your whole "there is nothing new under the sun" way of thinking is just an artifact of the era you were born in.

[-] hera@feddit.uk 1 points 1 month ago

Haha wtf are you talking about. You have no idea what generation I am, you don't know how old I am and I never said there is nothing new under the sun.

[-] aesthelete@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

I'm summarizing your shitty argument and viewpoint. I never said it was a direct quote.

Though, at one point even that tired ass quote and your whole way of thinking was put into words by someone for the first time.

[-] hera@feddit.uk 1 points 1 month ago

Well, you are doing a poor job of it and are bringing an unnecessary amount of heat to an otherwise civil discussion.

[-] aesthelete@lemmy.world 2 points 1 month ago

That's right. If you cannot win the argument the next best thing is to call for civility.

[-] NotASharkInAManSuit@lemmy.world 3 points 1 month ago* (last edited 1 month ago)

Pointing out that humans are not the same as a computer or piece of software on a fundamental level of form and function is hardly philosophical. It’s just basic awareness of what a person is and what a computer is. We can’t say at all for sure how things work in our brains, and you are evangelizing that computers are capable of the exact same thing, but better, yet you accuse others of not understanding what they’re talking about?

this post was submitted on 28 Jun 2025
997 points (100.0% liked)
