we are safe (discuss.tchncs.de)
[-] Swedneck@discuss.tchncs.de 21 points 10 months ago

It's kinda hilarious to me, because one of the FIRST things AI researchers did was get models to identify things and output answers together with a confidence score for each potential ID, and now we've somehow regressed from that point.
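The kind of output the comment describes can be sketched with a toy example: a classifier produces raw scores (logits), and a softmax turns them into per-class confidences. The labels and scores here are made up for illustration; this is a minimal sketch, not any particular model's code.

```python
import math

def softmax(logits):
    # Convert raw scores into probabilities that sum to 1.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from a classifier for three labels.
labels = ["cat", "dog", "fox"]
logits = [2.0, 1.0, 0.1]
confidences = softmax(logits)
for label, p in zip(labels, confidences):
    print(f"{label}: {p:.2f}")
```

Each identification comes with its own confidence, so the caller can see how sure the model is about every candidate, not just the top answer.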

[-] tryptaminev@feddit.de 23 points 10 months ago

Did we really regress from that?

I mean, giving a confidence for recognizing a certain object in a picture is relatively straightforward.

But LLMs put words together by their likelihood of belonging together given your input (terribly oversimplified). The confidence behind that has no direct relation to how likely the statements made are to be true. I remember an example where someone made ChatGPT say that 2+2 equals 5 because his wife said so. So ChatGPT was confident that something is right when the wife says it, simply because it thinks those words belong together.
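The point that next-word likelihood is not truth can be illustrated with a toy bigram model. The corpus below is invented for the sketch; a real LLM is vastly more sophisticated, but the principle is the same: the model scores what words tend to follow other words, with no notion of arithmetic truth.

```python
from collections import Counter, defaultdict

# Toy corpus: the model only sees which words follow which;
# it has no notion of whether the statements are true.
corpus = "my wife says two plus two equals five and my wife is always right".split()

# Build bigram counts: estimate P(next word | current word) from co-occurrence.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def next_word_confidence(word):
    # Normalize the counts into a confidence per candidate next word.
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_confidence("equals"))  # {'five': 1.0} -- fully confident, and wrong
```

The model is 100% "confident" that "equals" is followed by "five", because that's all it has ever seen; confidence here measures how words belong together, not whether the statement is correct.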

this post was submitted on 20 Nov 2023
1748 points (100.0% liked)

Programmer Humor
