[-] db0@lemmy.dbzer0.com 77 points 7 months ago

"Hallucinate" is the standard term used to describe GenAI models producing untrue statements

[-] Draegur@lemm.ee 25 points 7 months ago* (last edited 7 months ago)

in terms of communication utility, it's also a very accurate term.

when WE hallucinate, it's because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

when AIs hallucinate, it's because their predictive models generate results that do not align with reality: instead of referencing positively certain information, they fly off the rails and presume whatever was calculated to be likely.

it's the same song, but played on a different instrument.
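The "presuming what was calculated to be likely" point can be sketched with a toy frequency model (nothing like a real transformer, and the corpus here is purely hypothetical): the model picks the statistically likely continuation with no notion of whether the result is true.

```python
from collections import Counter

# Hypothetical training corpus: mostly true statements, plus one false one.
corpus = [
    ("the", "capital", "of", "france", "is", "paris"),
    ("the", "capital", "of", "france", "is", "paris"),
    ("the", "capital", "of", "australia", "is", "sydney"),  # false, but present
]

def next_token(context, corpus):
    """Return the most frequent token that followed `context` in the corpus."""
    followers = Counter(
        sent[len(context)]
        for sent in corpus
        if sent[:len(context)] == context and len(sent) > len(context)
    )
    return followers.most_common(1)[0][0] if followers else None

# The model emits whatever was most likely in its training data, true or not.
print(next_token(("the", "capital", "of", "australia", "is"), corpus))  # sydney
```

There is no "I don't know" path here: if the corpus contains a confident-sounding falsehood, the model confidently repeats it, which is the gist of the hallucination/confabulation analogy.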

[-] arken@lemmy.world 7 points 7 months ago

when WE hallucinate, it's because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

Is it really? You make it sound like this is a proven fact.

[-] CosmicCleric@lemmy.world 5 points 7 months ago* (last edited 7 months ago)

Is it really? You make it sound like this is a proven fact.

I believe that's the direction the scientific community is moving in, based on watching this Kyle Hill video.

[-] PipedLinkBot@feddit.rocks 2 points 7 months ago

Here is an alternative Piped link(s):

this Kyke Hill video

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[-] Dasus@lemmy.world 2 points 7 months ago

I know I'm responding to a bot, but... how does a PipedLinkBot turn "Kyle Hill" into "Kyke Hill"? More AI hallucinations?

[-] FarceOfWill@infosec.pub 4 points 7 months ago

Op has a pencil in the top right, looks like it was edited

[-] Dasus@lemmy.world 2 points 7 months ago

True, I missed that

[-] KillingTimeItself@lemmy.dbzer0.com 2 points 7 months ago

i mean, idk about the assumptions part of it, but if you asked a psych or a philosopher, im sure they would agree.

Or they would disagree and immediately exclaim about 3 pages' worth of thoughts, because otherwise they'd feel uneasy about their statement.

[-] UmeU@lemmy.world 1 points 7 months ago

Better than one of those pesky unproven facts

[-] assassinatedbyCIA@lemmy.world 2 points 7 months ago

I think a more accurate term would be confabulate based on your explanation.

[-] Draegur@lemm.ee 1 points 7 months ago

you know what, i like that! I like that a lot!

[-] Prandom_returns@lemm.ee 8 points 7 months ago

What standard is that? I'd like a reference.

[-] QuaternionsRock@lemmy.world 20 points 7 months ago
[-] Prandom_returns@lemm.ee 2 points 7 months ago

It's as much a "hallucination" as Tesla's Autopilot is an autopilot

https://en.m.wikipedia.org/wiki/Tesla_Autopilot

I don't propagate techbro "AI" bullshit peddled by companies trying to make a quick buck

Also, in the world of science and technology, a "standard" means something. Something that's not a link to a Wikipedia page.

It's still anthropomorphising software and it's fucking cringe.

[-] surewhynotlem@lemmy.world 26 points 7 months ago

Oh man, I'm excited for you. Today is the day you learn words can have two meanings! Wait until you see what the rest of the dictionary contains. It is crazy! But not actually crazy, because dictionaries don't have brains.

[-] Prandom_returns@lemm.ee 1 points 7 months ago

Wow, clever. Did you literally hallucinate this yourself or did you ask your LLM girlfriend for help?

And by literally, I mean figuratively.

[-] Semi_Hemi_Demigod@lemmy.world 21 points 7 months ago

You're gonna be real pissed to find out that computer bugs aren't literal bugs

[-] Prandom_returns@lemm.ee 1 points 7 months ago

I know it's a big word, but surely you can google what anthropomorphization is? Don't "ask" LLM, those things output garbage. Just google it.

[-] bbuez@lemmy.world 10 points 7 months ago

Watch out those software bugs may start crawling out of your keyboard

[-] laughterlaughter@lemmy.world 5 points 7 months ago* (last edited 7 months ago)

Like, literal garbage? The one sitting in my kitchen bin?!?!?!?!?!?!?!??!?!?!?!?!?!??!!?!

[-] QuaternionsRock@lemmy.world 1 points 7 months ago

No fucking shit it’s an anthropomorphization, nothing that can be hosted on GitHub has true human qualities…

The point is that everyone knows what it means within that context of AI, and using other terminology would only serve to obfuscate your message such that the average person couldn’t understand it as easily.

Non-living things also don’t have “behavior” (“the way in which someone conducts oneself or behaves”), but—hey look! People started anthropomorphizing things so much that it got added to the dictionary! (“the way in which something functions or operates”.)

It may not be ideal, and it may convince some people that LLMs are more human-like than they really are, but the one thing you haven’t done is suggest an alternative that conveys the meaning as effectively to the masses.

[-] FlyingSquid@lemmy.world 1 points 7 months ago

You call it a large language model, but there are much bigger things, it's only approximating a human language, and it isn't a physical model.

[-] T156@lemmy.world 1 points 7 months ago

Well, until a moth gets into your relays, anyhow.

[-] laughterlaughter@lemmy.world 14 points 7 months ago* (last edited 7 months ago)

Where have you been in the last two years, brah?

[-] june@lemmy.world 3 points 7 months ago

I’m a different person, but it’s the first time I’ve heard the term used. /shrug

[-] laughterlaughter@lemmy.world 4 points 7 months ago

Which is okay. I learn new things every day. I just find it funny that the other commenter is so fixated on the idea of "it can't be real because I've never heard of it."

[-] Prandom_returns@lemm.ee 1 points 7 months ago

Not under the sole of fake hype.

[-] summerof69@lemm.ee 7 points 7 months ago

My boy, who hurt you?

this post was submitted on 28 Mar 2024
406 points (100.0% liked)

Technology
