77 points | submitted 1 week ago by Arkouda@lemmy.ca to c/asklemmy@lemmy.world

To elaborate a little:

Since many people are unable to tell the difference between a "real human" and an AI, since they have been documented "going rogue" and acting outside their parameters, and since they can lie and can compose stories and pictures based on the training they received, I can't see AI as less than human at this point.

When I think about this, I think that is the reason we cannot create so-called "AGI": we have no proper example or understanding to build it from, and so we created what we knew. Us.

The "hallucinating" is interesting to me specifically because that seems what is different between the AI of the past, and modern models that acts like our own brains.

I think we really don't want to accept what we have already accomplished, because we don't like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.

top 28 comments
[-] Chozo@fedia.io 28 points 1 week ago

I think the difference comes from understanding. When we inferior, fleshy ones "make up" information, it's usually based on our understanding (or misunderstanding) of the subject at hand. We will fill in the blanks in our knowledge with what we know about similar subjects.

An LLM doesn't understand its output, though. All it knows is that word_string_x immediately follows word_string_y in 84.821% of its training data, so that's what gets pasted next.

For us, making up false information comes from gaps in our cognition, from personal agendas, our own unique lived experiences, etc. For an LLM, these are just mathematical anomalies.
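To make that concrete, here is a minimal, hypothetical sketch of pure next-word statistics: a toy bigram counter, nothing like a real transformer internally, but it illustrates the "word_string_x follows word_string_y N% of the time" idea.

```python
from collections import Counter, defaultdict

# Toy "training data".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word`, and how often it did."""
    counts = next_counts[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # ('cat', 0.5) -- chosen purely from co-occurrence counts
```

Note that nothing in that table represents what a cat or a mat actually is, which is the point about understanding above.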

[-] Michal@programming.dev 22 points 1 week ago* (last edited 1 week ago)

AI is a very broad term that includes more than machine learning, so I'll assume you mean LLMs.

The differences are:

  • They do not learn from experience like humans do. They learn by training, which is separate from conversation, where the context window is limited.
  • They only learn from text (hence the name Language Model), so they do not understand other inputs like touch, sight, sound, taste, and so on.
  • They do not think critically and take all input at face value; in particular, an LLM cannot corroborate input against experience of the real world.

Also, if you cannot tell the difference between a real human and an AI, it's only because your interaction with the AI is limited to text. If you could meet it like a real human, it would be obvious that it's a computer, not a person. If an image is blurry/pixelated enough, you couldn't tell a car from a house; that doesn't mean cars have become indistinguishable from houses.

[-] DacoTaco@lemmy.world 9 points 1 week ago* (last edited 1 week ago)

To add to this, this is how LLM sessions "get around" the experience issue: with every query/command/whatever, the whole context and past conversation is sent back to the model to be reprocessed. This is why, in long sessions, it takes longer and longer to generate a new response, and why the model forgets everything it "learned" from your session when you start a new one.
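As an illustration of that point, here is a minimal, hypothetical sketch of a stateless chat loop; `call_model` is a made-up stand-in for whatever completion API is in use, since the detail that matters is what gets sent on each turn.

```python
def call_model(messages):
    """Hypothetical stand-in: send `messages` to an LLM API and return its reply."""
    raise NotImplementedError

history = []  # the only "memory" the session has

def ask(user_text):
    history.append({"role": "user", "content": user_text})
    # The entire history is re-sent and re-processed on every turn, which is
    # why long sessions get slower and why nothing persists across sessions.
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# Starting a new session means starting with an empty `history`:
# everything the model appeared to "learn" in the old one is gone.
```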

[-] NaibofTabr@infosec.pub 13 points 1 week ago

The term "hallucinate" is a euphemism being pushed by the AI peddlers.

It's a computer program. It doesn't "hallucinate", it has errors.

In all cases of ML models being sold by companies, what you are actually looking at is poorly tested software that is not fit for purpose and has far less actual capability than what the marketing promises.

"Hallucination" in the context of LLMs is marketing bullshit designed to deflect from the reality that none of these programs have been properly quality checked and are extremely error prone.

If Excel gave bad answers for calculations 20% of the time it wouldn't be "hallucinating", it would just be broken, buggy software that requires more development time before distribution as a useful product.

[-] vrighter@discuss.tchncs.de 7 points 1 week ago

They aren't even errors. They are the system working as designed. The system is designed with randomness in mind so that the model can hallucinate, intentionally. The system can't ever be made reliable, not without some sort of paradigm shift.

[-] quediuspayu@lemmy.dbzer0.com 2 points 1 week ago

They call it hallucination because the first guy didn't know the word fabulation.

[-] ar1@lemmy.sdf.org 12 points 1 week ago

AI uses data to "guess" the most probable outcome. An LLM uses that to pick the guess with the highest probability of "sounding correct" to a human, and it is greatly affected by the data it was trained on.

One thing that is very different is that an AI/LLM doesn't take responsibility for what it says. Depending on its training data, it may tell someone with an incurable disease who asks about possible treatments to kill themselves. That would be distinctly odd if it ever happened in a human conversation. But because you don't like the answer and don't think it is "correct", you say the AI is "hallucinating".

It's like talking to a lion: you can mimic a roar, but it's up to the lion to decide whether it sounds nice or rude...

[-] missingno@fedia.io 10 points 1 week ago

From a theoretical perspective, it is entirely possible for code to simulate the activity of a human brain by simulating every neuron. And there would be deep philosophical questions to ask about the nature of thought and consciousness: is an electronic brain truly any different from a flesh one?

From a practical perspective, current technology simply isn't there yet. But it's hard to even describe the gap between how an LLM operates and how we operate, because our understanding of both LLMs and ourselves is honestly very poor. Hard to say more than just... no, they're not alike. At least not yet.

[-] Endmaker@ani.social 9 points 1 week ago

what is the difference between current AI and the human brain?

My understanding is that the fields of neuroscience and psychology are not developed enough (at this point in time) for anyone to provide a definitive answer to this question.

Anyone who claims otherwise would probably have to make assumptions, and may be talking out of their ass.

[-] nebulaone@lemmy.world 9 points 1 week ago* (last edited 1 week ago)

The only thing that can be said for sure is that the human brain uses both electricity and chemical reactions and seems to be capable of randomness, while the AI runs purely on electricity/code and isn't capable of randomness.

We don't know what consciousness is and we don't even know what life is so anything beyond this is pure speculation.

PS: Some of the answers here are demonstrably wrong, but I have learned not to get into arguments online anymore, it's better for your sanity.

[-] vrighter@discuss.tchncs.de 5 points 1 week ago* (last edited 1 week ago)

On the contrary, "temperature" is intentionally injected randomness in the process.

edit: which, to be clear, is not a good thing
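For anyone curious what "injected randomness" looks like mechanically, here is a minimal sketch with made-up scores: the model's raw scores are divided by the temperature before being turned into probabilities, and the next token is then drawn at random from that distribution.

```python
import math, random

def sample_with_temperature(logits, temperature=1.0):
    """Scale scores by temperature, softmax them, and sample one index.
    Low temperature sharpens the distribution (more deterministic);
    high temperature flattens it (more random)."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)                              # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.1]                         # made-up scores for three candidate tokens
print(sample_with_temperature(logits, temperature=0.7))
```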

[-] nebulaone@lemmy.world 4 points 1 week ago

Sure, but a computer can never give you a truly random number; it always has to calculate it from something. But, to be fair, we can't be certain that humans can either.

[-] missingno@fedia.io 2 points 1 week ago

True randomness can be achieved via specialized hardware. But I don't think that's a meaningful criterion to evaluate LLMs by in the first place here.

[-] nebulaone@lemmy.world 1 points 1 week ago

It would add an element of uncertainty and mutation.

[-] missingno@fedia.io 1 points 1 week ago

For all practical purposes, PRNGs are uncertain. True quantum randomness really only matters for cryptographic security, it's not important here.
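A small illustration of that point, using nothing beyond Python's standard library: a seeded PRNG is fully deterministic, yet unpredictable enough for sampling purposes if you don't know the seed.

```python
import random

a = random.Random(42)
b = random.Random(42)

# Two generators with the same seed produce the exact same "random" stream...
print([a.randint(0, 9) for _ in range(5)])
print([b.randint(0, 9) for _ in range(5)])  # identical to the line above

# ...so PRNG output is deterministic in principle, but without the seed it is
# unpredictable for practical purposes, which is all token sampling needs.
# True (hardware) randomness mainly matters for cryptography.
```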

[-] frezik 8 points 1 week ago

Let's clear some terms. Intelligence and consciousness are separate things that our language tends to conflate. Consciousness is the interpretation of sensory input. Hallucinations are what happen when your consciousness is misinterpreting that data.

You actually hallucinate to a minor degree all the time. For instance, pareidolia often takes the form of seeing human faces in rocks and clouds. Our consciousness is really tuned to patterns that look like human faces, and it sometimes gets it wrong.

We can actually do this to image recognition models. A model was tuned to finding dogs in movies. It could then modify the movie to show what it thought was there. It was then deliberately overtrained, and it output a movie with dogs all over the place.

The models definitely have some level of consciousness. Maybe not a lot, but some.

This is what I like about AI research. We learn about our own minds while studying it. But capitalism isn't using it in ways that are net helpful to humanity.

[-] PeriodicallyPedantic@lemmy.ca 8 points 1 week ago* (last edited 1 week ago)

That depends on how hardcore of a fatalist you are.

If you're purely a fatalist, then free will is an illusion, laws and punishment are immoral, consciousness is meaningless, and we are nothing more than deterministic pattern-matching machines, making us different from LLMs only in the details of our implementation and in the terrible optimization that evolution is known for.

But if you believe in some degree of free will, or you think there is value in consciousness, then we differ because LLMs are just auto-complete. They pseudo-randomly choose from a weighted list of statistically likely words (actually tokens) that would come next given the context (which is the conversation history and prompt). There is no free will, no understanding, any more than the man in the Chinese room understands Mandarin.

The whole conversation is so full of charged words because the LLM providers have intentionally anthropomorphized LLMs in their marketing, by using words like "reasoning". The APIs from before LLMs blew up provide a far less emotionally charged description of what LLMs do, with terms like "completions".
You wouldn't compare a human mind to your phone keyboard's word prediction, but that is the same thing scaled down. Where do you draw the line?
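To illustrate the "weighted list" description above, here is a minimal, hypothetical sketch of one generation step; the candidate tokens and weights are invented for the example.

```python
import random

# Hypothetical model output for some context: a weighted list of candidate tokens.
candidates = {"dog": 0.62, "cat": 0.25, "banana": 0.13}   # made-up numbers

context = "The quick brown fox jumps over the lazy"
token = random.choices(list(candidates), weights=list(candidates.values()), k=1)[0]
context += " " + token
print(context)  # usually "...lazy dog", sometimes "cat", occasionally "banana"

# Generation is just this step in a loop: append the sampled token to the
# context and ask for the next weighted list, until an end-of-text token.
```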

[-] lemmyknow@lemmy.today 2 points 1 week ago

Isn't that sorta what humans do? Picking words based on the ones used before, taking into consideration the context of the conversation?

[-] dee_dubs@lemmy.world 3 points 1 week ago

Not really. When asked a question, a human will think about the answer and construct a sentence to try to express that point. An LLM doesn't know what the answer is ahead of time; it's not working towards a point, it's just statistically guessing the next couple of letters over and over again. The human equivalent would be making random mouth noises and hoping the other person interprets them as words.

[-] PeriodicallyPedantic@lemmy.ca 1 points 1 week ago

Only if, like I said, you're a hardcore fatalist.

[-] besselj@lemmy.ca 6 points 1 week ago

The difference is that a human often has to be held accountable when they make a mistake, so most humans will use logic and critical thinking to try hard not to make mistakes, even if that takes longer than an LLM, whose "reasoning" is more like a slot machine.

[-] Arkouda@lemmy.ca 5 points 1 week ago

I would argue that AI should be held to account for the information it provides, and until AI is capable of having a personal bank account, damages should be paid by the company that created it.

The only reason I see that AI doesn't "hold itself to account" is that it was never programmed to. Much like if you do not properly educate a young human, they will not be held accountable a lot of the time because we understand their actions are the result of how they were brought up and taught, or "programmed".

You do bring up a good point, but I see that as a failing on the Humans making the AI and restricting it, not a demonstration that AI wouldn't be capable of holding itself and its decisions to account if it was taught to like we need to be taught to.

The difference is in how LLMs work vs. how animal brains work.

Animal brains use logic and reactions.

LLMs exclusively use statistics to generate their output. Even their "reasoning" is faked.

[-] potatoguy@potato-guy.space 4 points 1 week ago

How is a brain that works and learns (differently) since inception (creating new paths between neurons or strengthening existing ones, adapting to every touch, sound, temperature, body position, smell, and image it ever experiences) different from multiple matmuls that calculate an optimization path based on dual numbers (to get the gradient to descend) and from path-seeking algorithms like Adam?
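For readers who haven't met the "optimization path" half of that comparison, here is a toy sketch of the core loop: plain gradient descent on a single parameter. Adam layers running averages of the gradient and its square on top of this same idea.

```python
def loss_grad(w, x, y):
    # derivative of the squared error (w*x - y)^2 with respect to w
    return 2 * (w * x - y) * x

w, lr = 0.0, 0.1                               # initial weight and learning rate
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]    # toy data where the "right" w is 2

for _ in range(100):                           # repeatedly nudge w downhill
    for x, y in data:
        w -= lr * loss_grad(w, x, y)

print(round(w, 3))  # converges to ~2.0
```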

Idk, but I think it has a different way of learning and being. A child learns by being with others: it experiences things, its brain is wired in such a way as to learn those things, it makes errors and is corrected, so it learns not to make things up, or it starts to make things up on its own based on what it has learned. There isn't a model of what there is to be learned, there isn't a focus, just living. Machine learning algorithms learn by trying to predict the next token (not even learning wrong things and getting corrected; it's like those kids who, when they don't know something, just make something up, never having learned to be wrong), or the optimal way to guess a color in an upscaled image, or how to denoise an image to match a prompt, etc. These are very different in my understanding.

I believe (in 5 trillion years) that if wires could be arranged in such a way as to reorganize themselves simply by existing, like our brains, not matmul times 100 trillion, with these wires being a "robot" brain and the "robot" just being, and being taught by a society that sees this "robot" like any person we see today, then this "robot" would turn out to be just like a person. Our brain is physical, with nothing special about it; if it takes another form, like the "robot" brain, it would behave in the same way.

So, in my idiotic opinion, the scope of the thing, its relation with the environment, what it does, and how it does it make it different from what a person would do or be, etc. I'm probably 300% wrong, so yeah.

[-] Hegar@fedia.io 4 points 1 week ago

People just massively overestimate what goes on in a brain, I think. It's not magic. Consciousness, experience and understanding feel ineffable but they're just chemicals and electricity.

There's no divine and irreproducible spark, no reason that silicon and metal couldn't produce a system that does everything a brain does.

[-] mojofrododojo@lemmy.world 2 points 1 week ago

we don't let AI sleep. of course it's growing psychotic.

just turn the shit off for a few days and see if it helps. you'd want a nap too.

[-] MantisToboggon@lazysoci.al 1 points 1 week ago