In the medical industry, AI should stick to "look at this, it may be cancer, and you must confirm it." Any program that claims it "100% outperforms doctors" is bullshit and dangerous.
Why?
Basic safety that should be heavily regulated to prevent medical errors?
I know we live in the age of JavaScript where we don't give a fuck about quality anymore but it shouldn't be encouraged.
I'm both amused and mildly offended by the latter part of this comment.
... Well done, 10/10, no notes.
npm install cancerDiagnosis
Because, even today, you can’t and will never have a 100% reliable answer.
You need at least 2 different validators to reduce the probability of errors. And you can't just run the same check twice with the same AI, as both runs will share the same flaw. You need to check it from a different point of view (be it in terms of technology or resources/people).
This is the principle we have applied in aeronautics for decades, and even with these layers of precautions and security, you still have accidents.
ML is like the aircraft industry a century ago, safety rules will be written with the blood of the victims of this technology.
Let’s say we have a group of 10 people. 7 with cancer, 3 without.
If the AI detects cancer in 6 out of the 7, that's an 86% detection rate.
If the AI also flags 2 of the 3 healthy people and they get operated on, those operations count as a "100% success": after surgery, the patient is guaranteed cancer-free.
So operating on the healthy ones always looks like a success, and AI is trained on success. That's why a human should look at the scans too, for now.
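The 10-person example above can be checked with a quick sketch (the counts are the commenter's hypothetical, not numbers from the article):

```python
# Hypothetical 10-patient example: 7 with cancer, 3 without.
tp = 6  # cancers the AI catches (true positives)
fn = 1  # cancer it misses (false negative)
fp = 2  # healthy people flagged as cancer (false positives)
tn = 1  # healthy person correctly cleared (true negative)

sensitivity = tp / (tp + fn)                 # 6/7 ~ 0.86, the "86%" above
specificity = tn / (tn + fp)                 # 1/3 ~ 0.33
accuracy = (tp + tn) / (tp + fn + fp + tn)   # 7/10 = 0.70

print(f"sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f} accuracy={accuracy:.2f}")
```

Note how the headline-friendly accuracy (70%) hides the fact that two of three healthy people would have gone to surgery.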
for now and always. medicine is something you don't want to entrust to automation.
Well, theoretically, an organism is nothing but a system running fully automatically. So I can see the possibility to have it fixed by another system. In the meantime, AI should support doctors, by making the invisible visible.
This is what AI should be used for. Not the generative crap ChatGPT peddles.
AI is perfect for applications that look at tons of different variables for specific patterns, and it can be retrained on new data far more cheaply than training every doctor in the country.
A doctor's first and primary goal is keeping a patient alive. Second is to normalize quality of life. Third is to minimize suffering when possible.
There is a HUGE and artificial shortage of doctors and healthcare providers in this country, and largely the world. They honestly don't have enough time to review every patient record, symptoms, and make a diagnosis and treatment plan, THEN do their continuing education and licensing requirements, AND do any research if they are mandated to do so by their employer, AND if they are at a teaching hospital - teach.
These AI tools can look at an entire medical record, symptoms, laboratory results, and pathology images and make a very accurate diagnosis that is always run by a physician before making a determination. AI doesn't forget what it's learned either.
I really wish "AI" would die as a term; the machine vision and convolutional neural networks used in this application don't have much to do with the large language models most people think of with the modern incarnation of the term AI.
don’t have much to do with the large language models
On a technical level I disagree: they're only using one convolution layer. The biggest change compared to previous work on the same dataset is the gated MLP, an idea inspired by transformers (1), which in turn gave rise to the LLMs that are being hyped.
In general, I agree that AI is a useless marketing term.
Here's the paper: https://www.sciencedirect.com/science/article/pii/S2666990025000059?via=ihub
The confusion matrix and ROC curve are in section 5.2.
The image processing pipeline includes techniques ranging from the 2000s (preprocessing such as Otsu thresholding and watershed segmentation) to quite recent (the gated MLP, a "transformers light").
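For anyone curious what the gating idea looks like, here's a rough numpy sketch in the spirit of gMLP-style spatial gating; the layer sizes, ReLU, and near-identity gate initialization are my illustrative choices, not the paper's actual architecture:

```python
import numpy as np

def gated_mlp_block(x, w_in, w_gate, w_out):
    """Sketch of a gMLP-style block (illustrative, not the paper's layer).

    x: (seq_len, d_model) patch/token features.
    The hidden activation is split in two; one half gates the other
    after a projection across positions - the transformer-inspired part.
    """
    h = np.maximum(x @ w_in, 0.0)      # (seq, 2*d_ffn); ReLU stands in for GELU
    u, v = np.split(h, 2, axis=-1)     # two halves of the hidden units
    v = w_gate @ v                     # spatial projection across positions
    return (u * v) @ w_out             # elementwise gate, project back to d_model

rng = np.random.default_rng(0)
seq, d_model, d_ffn = 16, 32, 64
x = rng.standard_normal((seq, d_model))
out = gated_mlp_block(
    x,
    rng.standard_normal((d_model, 2 * d_ffn)) * 0.1,
    np.eye(seq) + rng.standard_normal((seq, seq)) * 0.01,  # near-identity gate
    rng.standard_normal((d_ffn, d_model)) * 0.1,
)
print(out.shape)  # (16, 32): same shape in, same shape out
```

The gate starting near the identity means the block initially behaves like a plain MLP and learns cross-position mixing gradually, which is the usual motivation for that init.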
I am able to identify 100% of cancer: just say "It is cancer" to each picture.
~~The article does not mention any other metrics than detection rate. What about recall etc.? Without it, this news is basically worthless.~~
I stand corrected, see the comments below. While the article still lacks important context, accuracy is well defined for this topic.
Accuracy in a classification context is defined as (N correct classifications / total classifications). So classifying everything as cancer would, in a balanced dataset, give you ~50% accuracy.
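A toy sketch of that "everything is cancer" classifier on a balanced dataset (illustrative numbers, not from the article):

```python
# The "just say cancer" classifier on a balanced toy dataset.
labels = [1] * 50 + [0] * 50   # 1 = cancer, 0 = healthy (balanced)
preds  = [1] * 100             # classifier always answers "cancer"

tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)

accuracy    = (tp + tn) / len(labels)  # 0.50: half the calls are wrong
recall      = tp / (tp + fn)           # 1.00: every real cancer is "found"
specificity = tn / (tn + fp)           # 0.00: every healthy person flagged
```

This is why recall (sensitivity) alone, like accuracy alone, can be gamed; you need both sides of the confusion matrix.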
This article is indeed badly written PR fluff. I linked the paper in a sister comment. Both the confusion matrix and the ROC curve look phenomenal. Train/test/validation split seems fine too, as do the training diagnostics, so I'm optimistic that it isn't a case of overfitting.
Of course, 3rd-party replication would be welcome, and I can't speak to the medical relevance of the dataset. But the computer vision side of things seems well executed.
Thx for the comment! I edited my post accordingly.
with an impressive 99.26% accuracy.
I feel this would be a blatant lie if it included a bunch of false positives.
https://mander.xyz/comment/17810389
While keeping the FPR low, our model keeps the TPR high, showing that it can accurately find real cases while reducing false alarms.
I'm not educated enough to know what recall means in this context, but there are tables with percentages for it on the page. (Would love an explanation; I'm not sure what to search for to get the right definition.)
I'm not educated enough to know what recall means in this context
This wiki page describes the terminology for binary classification. I always have to refer to it too, as it's very confusing :)
Thx for the comment! I edited my post accordingly.
one of the particularly good uses for AI! in fact it's so good and cheap that it'd actually be hard to turn a lot of profit on! which... hm....
this is not new
https://pmc.ncbi.nlm.nih.gov/articles/PMC10217496/
this AI is new though, but is it better?
If I state "every living creature that ever existed or will ever exist had, has, or will have cancer" I just diagnosed all the cancer in existence ... including cancer thousands of years from now. That is a 100% diagnosis rate.
But what would be the error rate?
The accuracy is provided if you read the article; the paper is also linked.