all 48 comments
[-] FaceDeer@fedia.io 43 points 1 month ago

Yeah, I'd much rather have random humans I don't know anything about making those "moral" decisions.

If you've already answered, "No," you may skip to the end.

So the purpose of this article is to convince people of a particular answer, not to actually evaluate the arguments pro and con.

[-] roguetrick@lemmy.world 37 points 1 month ago

What are you going to train it on, since basic algorithms aren't sufficient? Past committee decisions? If that's the case you're hard-coding whatever human bias you're supposedly trying to eliminate. A useless exercise.
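A toy sketch of the point above: a "model" fit to past committee decisions simply reproduces whatever bias those decisions contained. All data and group names here are invented for illustration.

```python
# Toy sketch: fitting to historical committee decisions hard-codes their bias.
# The "model" below is just the per-group historical approval rate.
from collections import defaultdict

# (group, approved) -- hypothetical historical committee outcomes
past_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approvals = defaultdict(list)
for group, approved in past_decisions:
    approvals[group].append(approved)

# Predicted approval probability = historical approval rate per group.
model = {g: sum(v) / len(v) for g, v in approvals.items()}
print(model)  # {'group_a': 0.75, 'group_b': 0.25} -- the bias IS the model
```

Any real training pipeline is more elaborate than this, but the failure mode is the same: the disparity in the labels becomes the prediction.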

[-] Giooschi@lemmy.world 14 points 1 month ago

A slightly better metric to train it on would be chance of survival/years of life saved thanks to the transplant. However, those also suffer from human bias, because past decisions influenced who got a transplant and thus what data we were able to gather.

[-] roguetrick@lemmy.world 7 points 1 month ago* (last edited 1 month ago)

And we do that with basic algorithms informed by research. But then the score gets tied and we have to decide who has the greatest chance of following through on their regimen based on things like past history and means to acquire the medication/go to the appointments/follow a diet/not drink. An AI model will optimize that based on wild demographic data that is correlative without being causative, and end up just being a black-box racist in a way that a committee that has to clarify its thinking to other members couldn't, you watch.

[-] optissima@lemmy.ml 9 points 1 month ago

Nah bud, you just authorize whatever the doctor orders are, because they are more knowledgeable of the situation.

[-] Imgonnatrythis@sh.itjust.works 34 points 1 month ago

That's not what the article is about. I think putting some more objectivity into the decisions you listed, for example, benefits the majority. Human factors will lean toward minority factions consisting of people of wealth, power, similar race, how "nice" they might be, or how many vocal advocates they might have. This paper just states that current AIs aren't very good at what we would call moral judgment.

It seems like algorithms would be the most objective way to do this, but I could see AI contributing by maybe looking for more complicated outcome trends. E.g.: Hey, it looks like people with this gene mutation with chronically uncontrolled hypertension tend to live less than 5 years after cardiac transplant - consider weighting your existing algorithm by 0.5%
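A minimal sketch of the kind of adjustment described above: down-weighting an existing allocation score by a small outcome-derived penalty. The function name, flag names, and coefficients are all invented for illustration, not a real allocation policy.

```python
# Hypothetical sketch: applying small multiplicative penalties to an existing
# allocation score for flagged outcome trends. All names and numbers invented.

def adjusted_score(base_score: float, risk_flags: dict[str, bool]) -> float:
    """Apply small multiplicative penalties for flagged outcome trends."""
    # e.g. "gene mutation X plus uncontrolled hypertension" -> 0.5% penalty
    penalties = {
        "mutation_x_with_uncontrolled_htn": 0.005,
    }
    score = base_score
    for flag, penalty in penalties.items():
        if risk_flags.get(flag):
            score *= (1.0 - penalty)
    return score

print(adjusted_score(100.0, {"mutation_x_with_uncontrolled_htn": True}))  # 99.5
```

The point of the multiplicative form is that the AI only nudges an existing, auditable algorithm rather than replacing it outright.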

[-] MsPenguinette@lemmy.world 17 points 1 month ago

Tho those complicated outcome trends can have issues with things like minorities having worse health outcomes due to a history of oppression and poorer access to healthcare. Will definitely need humans overseeing it, cause health data can be misleading looking purely at numbers.

[-] Imgonnatrythis@sh.itjust.works 4 points 1 month ago

I wouldn't say definitely. AI is subject to bias as well, of course, based on its training, but humans are very much so, and inconsistently so too. If you are putting a liver in a patient that has poorer access to healthcare, they are less likely to have as many life years as someone with better access. If that correlates with race, is this the junction where you want to make a symbolic gesture about equality by using that liver in a situation where it is likely to fail? Some people would say yes. I'd argue that those efforts toward improved equality are better spent further upstream. It gets complicated quickly - if you want it to be objective and scientifically successful, I think the less human bias the better.

[-] phdepressed@sh.itjust.works 10 points 1 month ago

Creatinine in urine was used as a measure of kidney function for literal decades despite African Americans having lower levels even when their kidneys were worse by other measures. Creatinine level is/was a primary determinant of transplant eligibility. Only a few years ago did some hospitals start to use inulin, which is a more race- and gender-neutral measurement of kidney function.
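For context on the race adjustment mentioned above: the 2009 CKD-EPI serum-creatinine equation multiplied the estimated GFR by a fixed coefficient for Black patients, which the 2021 revision removed. A rough sketch (coefficients quoted from the published 2009 equation; illustrative only, not for clinical use):

```python
# Sketch of the 2009 CKD-EPI eGFR equation, showing its race coefficient.
# Coefficients from the published 2009 equation; illustrative, not clinical.

def egfr_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient removed in the 2021 revision
    return egfr

# Identical labs, different reported kidney function (ratio is about 1.159):
same_labs = dict(scr_mg_dl=1.4, age=50, female=False)
print(egfr_2009(**same_labs, black=True) / egfr_2009(**same_labs, black=False))
```

A higher eGFR reads as better kidney function, so the coefficient could make a Black patient with identical labs appear ~16% healthier and thus further from transplant eligibility.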

No algorithm matters if the input isn't comprehensive enough and cost effective biological testing is not.

[-] Imgonnatrythis@sh.itjust.works 3 points 1 month ago

Well yes. Garbage in garbage out of course.

[-] phdepressed@sh.itjust.works 2 points 1 month ago

That's my point: this is real-world data, it's all garbage, and no amount of LLM rehashing fixes that.

[-] Imgonnatrythis@sh.itjust.works 1 points 1 month ago

Sure. The goal is more perfect here, not perfect.

[-] StructuredPair@lemmy.world 9 points 1 month ago

Everyone likes to think that AI is objective, but it is not. It is biased by its training which includes a lot of human bias.

[-] Fades@lemmy.world 25 points 1 month ago

The death panels Republican fascists claim Democrats were doing are now here, and it's being done by Republicans.

I hate this planet

[-] HexesofVexes@lemmy.world 19 points 1 month ago

"Treatment request rejected, insufficient TC level"

[-] cm0002@lemmy.world 7 points 1 month ago

A Voyager reference out in the wild! LMAO

[-] HexesofVexes@lemmy.world 4 points 1 month ago

Had to be done. It's just too damn close not to.

[-] kemsat@lemmy.world 18 points 1 month ago

Yeah. It’s much more cozy when a human being is the one that tells you you don’t get to live anymore.

[-] petrol_sniff_king 3 points 1 month ago

Human beings have a soul you can appeal to?
Not every single one, but enough.

[-] SabinStargem@lemmings.world 16 points 1 month ago

I don't mind AI. It is simply a reflection of whoever is in charge of it. Unfortunately, we have monsters who direct humans and AI alike to commit atrocities.

We need to get rid of the demons, else humanity as a whole will continue to suffer.

[-] RangerJosey@lemmy.ml 5 points 1 month ago

If it wasn't exclusively used for evil it would be a wonderful thing.

Unfortunately we also have capitalism. So everything has to be just the worst all the time so that the worst people alive can have more toys.

[-] SabinStargem@lemmings.world 2 points 1 month ago

Thing is, those terrible people don't enjoy everything they already own, and don't understand that they are killing cool things in the crib. People make inventions and entertain if they can...because it is fun, and they think they have neat things to show the world. Problem is, prosperity is needed to allow people the luxury of trying to create.

The wealthy are murdering the golden geese of culture and technology. They won't be happier for it, and will simply use their chainsaw to keep killing humanity in a desperate wish of finding happiness.

Transplant Candidates:

Black American Man who runs a charity: Denied ❌️

President: Approved ✅️

All Hail President Underwood

[-] daniskarma@lemmy.dbzer0.com 6 points 1 month ago* (last edited 1 month ago)

I don't really see how a human denying you a kidney is better than an AI doing it.

It's not as if it makes more or fewer kidneys available for transplant anyway.

Terrible example.

It would have been better to use some other treatment as an example, one that doesn't depend on finite resources but only on money. Even then, humans are already rejecting needed treatments without any AI involved, but at least the example would make some sense.

In the end, as always, the people who have chosen AI as the "enemy" haven't understood anything about the current state of society and how things work. Another example of how picking the wrong fights is a path to failure.

[-] ChogChog@lemmy.world 12 points 1 month ago

Responsibility. We’ve yet to decide as a society how we want to handle who is held responsible when the AI messes up and people get hurt.

You’ll start to see AI being used as a defense of plausible deniability as people continue to shirk their responsibilities. Instead of dealing with the tough questions, we’ll lean more and more on these systems to make it feel like it’s outside our control so there’s less guilt. And under the current system, it’ll most certainly be weaponized by some groups to indirectly hurt others.

“Pay no attention to that man behind the curtain”

[-] daniskarma@lemmy.dbzer0.com 1 points 1 month ago

Software has been involved in decision-making for decades.

Anyway, whoever is truly responsible for denying a medical treatment has never been held accountable (except by our angel Luigi), whether AI was used or not.

[-] TankovayaDiviziya@lemmy.world 5 points 1 month ago

Say what you will about Will Smith, but his movie I, Robot made a good point about this over 20 years ago.

(damn I'm old)

[-] egidighsea@lemmy.dbzer0.com 3 points 1 month ago

The kidney would still be transplanted at the end, be the decision made by human or AI, no?

[-] faythofdragons@slrpnk.net 3 points 1 month ago

What's with the Hewlett Packard Enterprises badging at the top?

[-] jsomae@lemmy.ml 3 points 1 month ago

Let's get more kidneys out there instead with tax credits for donors.

[-] stevedice@sh.itjust.works 2 points 1 month ago

Hasn't it been demonstrated that AI is better than doctors at medical diagnostics, and that we don't use it only because hospitals would have to take the blame if the AI fucks up, whereas they can just fire a doctor who fucks up?

[-] cynar@lemmy.world 8 points 1 month ago

I believe a good doctor, properly focused, will outperform an AI. AI are also still prone to hallucinations, which is extremely bad in medicine. Where they win is against a tired, overworked doctor with too much on his plate.

Where it is useful is as a supplement. An AI can put a lot of seemingly innocuous information together to spot more unusual problems. Rarer conditions can be missed, particularly if they share symptoms with more common problems. An AI that can flag possibilities for the doctor to investigate would be extremely useful.

An AI diagnostic system is a tool for doctors to use, not a replacement.

[-] stevedice@sh.itjust.works 1 points 1 month ago

Studies have also shown that doctors using AI don't do better than doctors alone, but AI on its own does. Although that one is attributed to the doctors not knowing how to use ChatGPT.

[-] cynar@lemmy.world 1 points 1 month ago

Do you have a link to that study? I'd be interested to see what the false positive/negative rates were. Those are the big danger of LLMs being used, and why a trained doctor would be needed.

[-] lightsblinken@lemmy.world 3 points 1 month ago
[-] HK65@sopuli.xyz 3 points 1 month ago

It is better at simple pattern recognition, but much worse at complex diagnoses.

It is useful as a help to doctors but won't replace them.

As an example, it can give you a good prediction on who likely has lung cancer out of thousands of CT images. It will completely fuck up prognoses and treatment recommendations though.

this post was submitted on 08 Mar 2025
945 points (100.0% liked)

Technology
