[-] agent_flounder@lemmy.one 2 points 1 year ago

I think what gets lost in translation with LLMs (and machine vision and similar ML tech) is that it isn't magic and it isn't emergent behavior. It isn't truly intelligent.

LLMs do a good job of tricking us into thinking they are more than they are. They generate a seemingly appropriate response to input based on training, but it's nothing more than a statistical model of the most likely chain of words in response to another chain of words, based on questions and "good" human responses.

There is no understanding behind it. No higher cognitive process. Just "what words go next based on Q&A training data." Which is why we get well written answers that are often total bullshit.
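The "what words go next" idea can be sketched with a toy bigram model. This is a deliberately crude illustration (real LLMs use neural networks over tokens and huge corpora, not raw word counts), but it shows the statistical core of the comment above:

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny
# training corpus, then pick the statistically most common successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Return the most frequent word seen after `word`, if any.
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # "cat" follows "the" most often in this corpus
```

The model has no idea what a cat is; it only knows which word tends to come next. Scale that idea up enormously and you get fluent text with no understanding behind it.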

Even so, the tech could easily upend many writing careers.

[-] StalksEveryone@futurology.today 2 points 1 year ago

I’ve had the GPT-3.5 model give me a made-up source for research. Either that, or it told me the source material was related to what I was researching when it wasn’t. Regardless, it was one of those BS moments; it's called a hallucination, I think.

this post was submitted on 17 Sep 2023
34 points (100.0% liked)

Futurology
