top 7 comments
[-] YetAnotherNerd@sopuli.xyz 41 points 1 day ago

There’s a story about how they were teaching a model to detect tanks and thought it was awesome… and then they found out it had learned that particular types of trees correlated with the type of tank in the training data, so the AI just looked at the trees and based its answer on that.

[-] Sibbo@sopuli.xyz 43 points 1 day ago* (last edited 1 day ago)

There was actually a research paper where they figured out they could predict whether people drink wine or beer with a CNN that looks at knee X-rays. It turns out part of their data came from beer-drinking regions and another part from wine-drinking regions, and the images were ever so slightly distorted depending on which physical machine they were taken with.

The paper pointed that out; its whole purpose was to show how much bullshit you can use AI for if you're not careful what you train it with.

The paper had many other examples of things they could predict just from people's knee X-rays. All of them had non-medical explanations like the one above.
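
The failure mode in that anecdote is easy to reproduce synthetically. Here's a minimal sketch (hypothetical data and plain NumPy, not from the paper): the "medical" features are pure noise, but one extra column simulates a scanner artifact that perfectly tracks the label in the training set and is absent at deployment. The model aces training and collapses to chance on the test set.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10

def make_data(n, leak):
    # d noise columns standing in for the real image features:
    # they carry no information about the label at all.
    X = rng.normal(size=(n, d))
    y = rng.integers(0, 2, size=n)
    # One extra +/-1 column simulates the scanner artifact. With
    # leak=True it perfectly tracks the label (different machines for
    # the beer vs. wine regions); with leak=False it's a coin flip.
    artifact = 2.0 * y - 1 if leak else 2.0 * rng.integers(0, 2, size=n) - 1
    return np.hstack([X, artifact[:, None]]), y

Xtr, ytr = make_data(n, leak=True)    # confounded training set
Xte, yte = make_data(n, leak=False)   # deployment: confound gone

# Minimal logistic regression via gradient descent.
w = np.zeros(d + 1)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xtr @ w))
    w -= 0.1 * (Xtr.T @ (p - ytr)) / n

acc = lambda X, y: np.mean(((X @ w) > 0) == y)
print(f"train accuracy: {acc(Xtr, ytr):.2f}")  # near perfect: it found the artifact
print(f"test accuracy:  {acc(Xte, yte):.2f}")  # near chance
```

Nothing here is specific to CNNs or X-rays; any learner handed a feature that correlates with the label better than the real signal will happily take the shortcut.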

[-] YetAnotherNerd@sopuli.xyz 11 points 1 day ago

That’s fantastic, thanks for sharing!

[-] Davel23@fedia.io 21 points 1 day ago

There was something similar with a cancer-detecting model somewhat recently. It was trained on a bunch of scans both positive and negative. However, positive scans had to be signed off on by a doctor. So the model just ended up looking for signatures.

[-] TropicalDingdong@lemmy.world 13 points 1 day ago

It's the same issue with LLMs. Their responses are always the least adequate, minimum acceptable answer, with an OVERWHELMING amount of "caution garnish" to create the impression of a well-thought-out answer.

[-] Wolf314159@startrek.website 3 points 19 hours ago

That also sounds a lot like the kind of comment that Reddit (and Lemmy, and really any social network with votes) grooms for if you prefer upvotes to arguing with pedants and trolls. Eventually all you're left with are boring, overqualified comments or inflammatory ones, because when the mob rules you're optimizing for the most popular/engaging answer. It's like conversational least-squares analysis.

I wonder where the LLM trolls are. Maybe they're just so subtle we haven't noticed them. Maybe LLMs aren't hallucinating answers so much as they are trolling us. And here is where I qualify my answer in an attempt to quell the fools who might think anything I've said implies that LLMs are anywhere close to sapient.

[-] acantharea@lemmy.world 1 point 1 day ago

So are we going with EBMs now??

this post was submitted on 03 Mar 2026
93 points (100.0% liked)

science

25671 readers

A community to post scientific articles, news, and civil discussion.


rule #1: be kind

founded 2 years ago