Breast Cancer
(mander.xyz)
If its false negative rate is as low as that of human-read mammograms, I see no issue. Feed the scans through the AI first and have a human check only the positive results. That saves doctors' time when a scan is so clean that even the AI doesn't see anything fishy.
Alternatively, if it has a lower false positive rate, have doctors check only the negative results. If the AI sees something, it's DEFINITELY worth a biopsy; the human doctor just double-checks the negative readings to make sure nothing worth looking into goes unnoticed.
Either way, as long as it isn't worse than humans on both kinds of failure, it's useful for saving medical resources.
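The two routing strategies above can be sketched as a tiny decision function. This is purely illustrative logic (the function name and return strings are made up for this comment, not from any real clinical system):

```python
def triage(ai_positive: bool, ai_fnr_at_or_below_human: bool) -> str:
    """Route a mammogram given the AI's read.

    If the AI's false negative rate is at or below the human rate,
    AI-negative scans can safely skip human review and only the
    positives go to a radiologist. Otherwise (the AI's advantage is
    a lower false positive rate), AI positives go straight to workup
    and humans double-check the AI negatives instead.
    """
    if ai_fnr_at_or_below_human:
        return "radiologist review" if ai_positive else "no further review"
    return "biopsy workup" if ai_positive else "radiologist review"
```

In both branches, the scarce resource (radiologist time) is spent only on the half of the results where the AI is the weaker judge.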
An image recognition model like this is usually tuned specifically for a very low false negative rate (often well below human) in exchange for a high false positive rate (overly cautious about cancer)!
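That tuning usually just means picking the decision threshold on a validation set so the missed-positive rate stays under a target. A minimal sketch, assuming the model outputs a probability score per scan (all names and the toy data here are hypothetical):

```python
import numpy as np

def pick_threshold(scores: np.ndarray, labels: np.ndarray,
                   max_fnr: float = 0.01) -> float:
    """Return a decision threshold whose false negative rate on this
    validation set is at most max_fnr.

    scores: model probability of cancer per scan; labels: 1 = cancer.
    """
    pos_scores = np.sort(scores[labels == 1])
    # How many positives we are allowed to miss at the target FNR.
    allowed_misses = int(max_fnr * len(pos_scores))
    # A threshold just below the (allowed_misses + 1)-th lowest positive
    # score misses at most allowed_misses positives.
    return float(pos_scores[allowed_misses]) - 1e-9

# Toy validation data, purely for illustration.
scores = np.array([0.05, 0.2, 0.3, 0.7, 0.8, 0.95, 0.9, 0.6])
labels = np.array([0,    0,   0,   1,   1,   1,    1,   0])

t = pick_threshold(scores, labels, max_fnr=0.0)  # tolerate zero misses
preds = scores >= t
false_negatives = int(np.sum((labels == 1) & ~preds))
```

Driving `max_fnr` down pushes the threshold lower, which is exactly what inflates the false positive rate the parent comment mentions.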
This is exactly what is being done. My eldest child is in a Ph.D. program for human-robot interaction and medical intervention, and has worked on image analysis systems in this field. Their intended use is exactly that: a "first look" and a "second look". A first look to help catch the small, easily overlooked pre-tumors and tentatively mark clear scans; a second look to be a safety net for tired, overworked, or outdated eyes.
Nice comment. I like the detail.
For me, though, the main takeaway isn't in the details; it's the true usefulness of AI. The specifics of the implementation matter less than the general use case, which is the main point.