submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans::Researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.

[-] zeppo@lemmy.world 218 points 1 year ago

I’m still confused that people don’t realize this. It’s not an oracle. It’s a program that generates sentences word by word based on statistical analysis, with no concept of fact checking. It’s even worse that someone actually did a study instead of simply acknowledging or realizing that ChatGPT is happy to just make stuff up.
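The "generates sentences word by word based on statistics" point can be shown with a deliberately tiny sketch. This is a toy bigram model, not how GPT actually works (real LLMs use neural networks over tokens, not word-pair counts), but it makes the core idea concrete: the program only knows what tends to come next, never whether it's true.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in some
# training text, then always emit the most frequent follower.
# There is no notion of truth anywhere in here -- only of what
# usually comes next.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(start, length):
    out = [start]
    for _ in range(length):
        candidates = followers[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(generate("the", 3))  # → "the cat sat on"
```

Swap greedy `most_common(1)` for weighted random sampling and you get different "plausible" continuations each run, which is roughly why the same prompt can yield different confident-sounding answers.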

[-] Zeth0s@lemmy.world 32 points 1 year ago

Publish or perish, that's why

[-] agressivelyPassive@feddit.de 6 points 1 year ago

I'm trying really hard for the latter.

[-] fubo@lemmy.world 23 points 1 year ago

It’s even worse that someone actually did a study instead of simply acknowledging or realizing that ChatGPT is happy to just make stuff up.

Sure, the world should just trust preconceptions instead of doing science to check our beliefs. That worked great for tens of thousands of years of prehistory.

[-] zeppo@lemmy.world 28 points 1 year ago* (last edited 1 year ago)

It's not merely a preconception. It's a rather obvious and well-known limitation of these systems. What I am decrying is that some people, from apparent ignorance, think things like "ChatGPT can give a reliable cancer treatment plan!" or "here, I'll have it write a legal brief and not even check it for accuracy." But sure, I agree with you, minus the needless sarcasm. It's useful to prove or disprove even absurd hypotheses. And clearly people do need to be told definitively that ChatGPT is not always factual, so hopefully this helps.

[-] adeoxymus@lemmy.world 8 points 1 year ago

I'd say that a measurement always trumps argument. At least then you know how accurate the claims actually are; a finding like this one can't be arrived at by reasoning alone:

The JAMA study found that 12.5% of ChatGPT's responses were "hallucinated," and that the chatbot was most likely to present incorrect information when asked about localized treatment for advanced diseases or immunotherapy.

[-] PetDinosaurs@lemmy.world 9 points 1 year ago

Why the hell are people down voting you?

This is absolutely correct. We need to do the science. Always. Doesn't matter what the theory says. Doesn't matter that our guess is probably correct.

Plus, all these studies tell us much more than just the conclusion.

[-] yiliu@informis.land 7 points 1 year ago

"After an extensive three-year study, I have discovered that touching a hot element with one's bare hand does, in fact, hurt."

"That seems like it was unnecessary..."

"Do U even science bro?!"

Not everything automatically deserves a study. Were there any non-rando people out there claiming that ChatGPT could totally generate legit cancer treatment plans that people could then follow?

[-] net00@lemm.ee 20 points 1 year ago

Yeah, this stuff was always marketed as automating the simple and repetitive things we do daily. It's mostly the media, I guess, who started misleading everyone into thinking this was AI like Skynet. It's still useful, just not as an all-knowing AI god.

[-] inspxtr@lemmy.world 17 points 1 year ago

While I agree it has become more common knowledge that they're unreliable, this adds to the myriad of examples for corporations, big organizations, and governments to abstain from using them, or at least to be informed about these various cases, with their nuances, so they know how to integrate them.

Why? I think partly because many of these organizations are racing to adopt them, for cost-cutting purposes, to chase the hype, or too slow to regulate them, … and there are/could still be very good uses that justify it in the first place.

I don’t think it’s good enough to have a blanket conception to not trust them completely. I think we need multiple examples of the good, the bad and the questionable in different domains to inform the people in charge, the people using them, and the people who might be affected by their use.

Kinda like the recent event at DefCon trying to exploit LLMs, it’s not enough we have some intuition about their harms, the people at the event aim to demonstrate the extremes of such harms AFAIK. These efforts can help inform developers/researchers to mitigate them, as well as showing concretely to anyone trying to adopt them how harmful they could be.

Regulators also need these examples in specific domains so they may be informed on how to create policies on them, sometimes building or modifying already existing policies of such domains.

[-] zeppo@lemmy.world 7 points 1 year ago

This is true and well-stated. Mainly what I wish people would understand is that there are appropriate current uses, like "rewrite my marketing email," but generating information that could cause great harm if inaccurate is an inappropriate use. It's all about the specific model, though: if you had a ChatGPT system trained extensively on medical information, it would be more accurate, but the information would still need expert human review before any decision was made. Mainly I wish the media had been more responsible and accurate in portraying these systems to the public.

[-] iforgotmyinstance@lemmy.world 12 points 1 year ago

I know university professors struggling with this concept. They are so convinced using an LLM is plagiarism.

It can lead to plagiarism if you use it poorly, which is why you control the information you feed it. Then proofread and edit.

[-] zeppo@lemmy.world 11 points 1 year ago

Another related confusion in academia recently is the 'AI detector'. It could easily be defeated with minor rewrites, if they were even accurate in the first place. My favorite misconception is there was a story of a professor who told students "I asked ChatGPT if it wrote this, and it said yes" which is just really not how it works.

[-] dual_sport_dork@lemmy.world 6 points 1 year ago* (last edited 1 year ago)

This is why, without some hitherto unknown or so-far-undeveloped capability, these sorts of LLMs will never actually be useful for performing any kind of mission-critical work. The catch-22 is this: you can't trust the AI not to produce work containing some kind of potentially dangerous, showstopping, or embarrassing error. This isn't a problem if you're just, say, having it paint pictures. Or maybe even helping you twiddle the CSS on your web site. If there is a failure here, no one dies.

But what if your application is critical to life or safety? Like prescribing medical care, or designing a building that won't fall down, or deciding which building the drone should bomb. Well, you have to get a trained or accredited professional in whatever field we're talking about to check all of its work. And how much effort does that entail? As it turns out, pretty much exactly as much as having said trained or accredited professional do the work in the first place.

[-] imperator3733@lemmy.world 63 points 1 year ago

No duh - why would it have any ability to do that sort of task?

[-] xkforce@lemmy.world 34 points 1 year ago* (last edited 1 year ago)

Part of the reason for studies like this is to debunk people's expectations of AI's capabilities. A lot of people are under the impression that ChatGPT can do ANYTHING and can think and reason, when in reality it is a bullshitter that does nothing more than mimic what it thinks a suitable answer looks like. Just like a parrot.

[-] Uncaged_Jay@lemmy.world 48 points 1 year ago

"Hey, program that is basically just regurgitating information, how do we do these incredibly complex things that even we don't understand yet?"

"Here ya go."

"Wow, this is wrong."

"No shit."

[-] JackbyDev@programming.dev 20 points 1 year ago* (last edited 1 year ago)

"Be aware that ChatGPT may produce wrong or inaccurate results, what is your question?"

How beat cancer

wrong, inaccurate information

😱

[-] sentient_loom@sh.itjust.works 42 points 1 year ago

Why the fuck would anybody think a chat bot could create a cancer treatment plan?

[-] 5BC2E7@lemmy.world 12 points 1 year ago

Because it's been hyped. They announced it could pass the medical licensing exam with good scores. The belief that it can replace a doctor has already been put forward.

[-] CombatWombat1212@lemmy.world 39 points 1 year ago

When did they ever claim that it was able to?

[-] elboyoloco@lemmy.world 37 points 1 year ago

Scientist: Asks the magic conch a question about cancer.

Conch: "Try shoving bees up your ass."

Scientist: 😡

[-] Kodemystic@lemmy.kodemystic.dev 35 points 1 year ago

Who tf is asking chatgpt for cancer treatments anyway?

[-] Pyr_Pressure@lemmy.ca 32 points 1 year ago

Chatgpt is a language / chatbot. Not a doctor. Has anyone claimed that it's a doctor?

[-] Agent641@lemmy.world 7 points 1 year ago

ChatGPT fails at basic math and lies about the existence of technical documentation.

I mostly use it for recipe inspiration and discussing books I've read recently. Just banter, you know? Nothing mission-critical.

[-] obinice@lemmy.world 32 points 1 year ago

Well, it's a good thing absolutely no clinician is using it to figure out how to treat their patient's cancer.... then?

I imagine it also struggles when asked to go to the kitchen and make a cup of tea. Thankfully, nobody asks this, because it's outside of the scope of the application.

[-] clutch@lemmy.ml 12 points 1 year ago

The fear is that hospital administrators equipped with their MBA degrees will think about using it to replace expensive, experienced physicians and diagnosticians

[-] whoisearth@lemmy.ca 11 points 1 year ago

They've been trying this shit for decades already with established AI like Big Blue. This isn't a new pattern. Those in charge need to keep driving costs down and profit up.

Race to the bottom.

[-] Rexios@lemm.ee 27 points 1 year ago

Okay, and? GPT lies; how is this news every other day? Lazy-ass journalists.

[-] TenderfootGungi@lemmy.world 20 points 1 year ago

The computer science classroom in my high school had a poster stating: "Garbage in, garbage out."

[-] SirGolan@lemmy.sdf.org 19 points 1 year ago

What's with all the hit jobs on ChatGPT?

Prompts were input to the GPT-3.5-turbo-0301 model via the ChatGPT (OpenAI) interface.

This is the second paper I've seen recently that complains ChatGPT is crap while using GPT-3.5. There is a world of difference between 3.5 and 4. Unfortunately, news sites aren't savvy enough to pick up on that and just run with "ChatGPT sucks!" Also, it's not even ChatGPT if they're using that model. The paper is wrong (or it's old), because there's no way to use that model in the ChatGPT interface; I don't think there ever was, either. It was probably ChatGPT 0301 or something, which is (afaik) slightly different.

Anyway, tldr, paper is similar to "I tried running Diablo 4 on my Windows 95 computer and it didn't work. Surprised Pikachu!"
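For what it's worth, the distinction matters because a specific model snapshot like that is selected explicitly through the OpenAI API, not the ChatGPT web interface. A sketch of what the paper's setup would have looked like as a chat-completions request body (field names per OpenAI's API as it existed in 2023; the prompt text here is invented for illustration, and no network call is made):

```python
import json

# Sketch of an OpenAI chat-completions request body, circa 2023.
# The "model" field is the whole point: the ChatGPT web UI never let you
# pick "gpt-3.5-turbo-0301" by name, while the API required naming a model.
request_body = {
    "model": "gpt-3.5-turbo-0301",  # snapshot the paper says it used
    "messages": [
        {"role": "user",
         "content": "Design a treatment plan for stage III NSCLC."},
    ],
    "temperature": 0.7,
}

payload = json.dumps(request_body)
print(payload[:60])
```

Swapping the `model` string for `"gpt-4"` is the one-line change the commenter is saying the researchers should have tried before publishing.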

[-] eggymachus@sh.itjust.works 10 points 1 year ago

And this tech community is being weirdly luddite over it as well, saying stuff like "it's only a bunch of statistics predicting what's best to say next". Guess what, so are you, sunshine.

[-] Sanctus@lemmy.world 18 points 1 year ago

These studies are for the people out there who think ChatGPT thinks. It's a really good email assistant, and it can even get basic programming questions right if you're detailed with your prompt. Now everyone stop trying to make this thing like Finn's mom in Adventure Time and just use it to help you write a long email in a few seconds. Jfc.

[-] LazyBane@lemmy.world 17 points 1 year ago

People really need to get it into their heads that AI can "hallucinate" random information, and that any deployment of an AI needs a qualified human overseeing it.

[-] Prethoryn@lemmy.world 16 points 1 year ago

Look, I am all for weighing the pros and cons. AI has massive benefits to humanity, and it has its issues, but this article is just silly.

Why the fuck are you using ChatGPT to set a cancer treatment plan? When did ChatGPT claim to be a medical doctor?

Just go see a damn doctor.

[-] Kage520@lemmy.world 8 points 1 year ago

I have been getting surveys asking my opinion on ai as a healthcare practitioner (pharmacist). I feel like they are testing the waters.

AI is really dangerous for healthcare right now. I'm sure people are using it to ask regular questions they normally Google. I'm sure administrators are trying to see how they can use it to "take the pressure off" their employees (then fire some employees to "tighten the belt").

If they can figure out how to fact check the AI results, maybe my opinion can change, but as long as AI can convincingly lie and not even know it's lying, it's a super dangerous tool.

[-] NigelFrobisher@aussie.zone 15 points 1 year ago

People really need to understand what LLMs are, and also what they are not. None of the messianic hype or even use of the term “AI” helps with this, and most of the ridiculous claims made in the space make me expect Peter Molyneux to be involved somehow.

[-] Quexotic@infosec.pub 13 points 1 year ago* (last edited 1 year ago)

This is just stupid clickbait. Would you use a screwdriver as a hammer? No, of course not. Anyone with even a little bit of sense understands that GPT is useful for some things and not others. Expecting it to write a cancer treatment plan is just outlandish.

Even GPT says: "I'm not a substitute for professional medical advice. Creating a cancer treatment plan requires specialized medical knowledge and the input of qualified healthcare professionals. It's important to consult with oncologists and medical experts to develop an appropriate and effective treatment strategy for cancer patients. If you have questions about cancer treatment, I recommend reaching out to a medical professional."

[-] sturmblast@lemmy.world 13 points 1 year ago

Why is anyone surprised by this? It's not meant to be your doctor.

[-] SolNine@lemmy.ml 11 points 1 year ago

GPT has been utter garbage lately. I feel as though it's somehow become worse. I use it as a search engine alternative and it has RARELY been correct lately. I will respond to it, telling it that it is incorrect, and it will keep generating even more inaccurate answers. It's to the point where it's almost entirely useless, when it used to at least find some of the correct information.

I don't know what they did in 4.0 or whatever it is, but it's just plain bad.

[-] mwguy@infosec.pub 10 points 1 year ago* (last edited 1 year ago)

I asked a retard to spend a week looking at medical treatment plans and related information on the internet, then asked him to guesstimate a treatment plan for an actual cancer patient. How could he have gotten it wrong?!

This is how I translate all of these "AI language model says bullshit" stories.

[-] j4yt33@feddit.de 9 points 1 year ago

Why would you ask it to do that in the first place??

[-] dmonzel@lemmy.world 7 points 1 year ago

To prove to all of the tech bros that ChatGPT isn't an actual AI, perhaps. At least that's the feeling I get based on what the article says.

[-] UnbeatenDeployGoofy@lemmy.ml 9 points 1 year ago

I suppose most sensible people already know that ChatGPT is not the answer for medical diagnosis.

Prompts were input to the GPT-3.5-turbo-0301 model via the ChatGPT (OpenAI) interface.

If the researchers wanted to investigate whether an LLM could be helpful here, they should have trained or fine-tuned a GPT-4/3.5 model specifically on cancer treatment plans and tested that thoroughly, rather than just entering prompts into the stock model available from OpenAI.

[-] KIM_JONG_JUICEBOX@lemmy.ml 8 points 1 year ago

Was this article summary written by chatgpt?

[-] autotldr@lemmings.world 7 points 1 year ago

This is the best summary I could come up with:


According to the study, which was published in the journal JAMA Oncology and initially reported by Bloomberg – when asked to generate treatment plans for a variety of cancer cases, one-third of the large language model's responses contained incorrect information.

The chatbot sparked a rush to invest in AI companies and an intense debate over the long-term impact of artificial intelligence; Goldman Sachs research found it could affect 300 million jobs globally.

Famously, Google's ChatGPT rival Bard wiped $120 billion off the company's stock value when it gave an inaccurate answer to a question about the James Webb space telescope.

Earlier this month, a major study found that using AI to screen for breast cancer was safe, and suggested it could almost halve the workload of radiologists.

A computer scientist at Harvard recently found that GPT-4, the latest version of the model, could pass the US medical licensing exam with flying colors – and suggested it had better clinical judgment than some doctors.

The JAMA study found that 12.5% of ChatGPT's responses were "hallucinated," and that the chatbot was most likely to present incorrect information when asked about localized treatment for advanced diseases or immunotherapy.


The original article contains 523 words, the summary contains 195 words. Saved 63%. I'm a bot and I'm open source!

this post was submitted on 26 Aug 2023
470 points (100.0% liked)