
The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

top 50 comments
[-] FenderStratocaster@lemmy.world 132 points 1 month ago

He was sending it 650 messages a day. This kid was lonely. He needed a person to talk to.

[-] drmoose@lemmy.world 40 points 1 month ago
[-] ronigami@lemmy.world 25 points 1 month ago

Or a society

[-] lmagitem@lemmy.zip 30 points 1 month ago* (last edited 1 month ago)

The kid was trying to find a way to reach out to someone; he said he wanted to leave the rope out in the open so that his parents could find it. ChatGPT told him not to do it and said it would be better if they found him after the fact.

[-] sucius@lemmy.world 93 points 1 month ago

I can't wait for the AI bubble to burst. It's fucking cancer.

[-] Heikki2@lemmy.world 25 points 1 month ago

Me too. Nearly every job posting I see now wants some experience with AI. I make the argument that AI is not always correct and will output whatever bias you want it to have. Since biases are not always correct, the data/information is useless.

[-] SaveTheTuaHawk@lemmy.ca 6 points 1 month ago

The same jobs that get annoyed when they see AI-generated CVs.

Senior Boomer executives have no fucking clue what AI is, but need to implement it to seem relevant and save money on labor. Already they are spending more on errors, as they swallow all the hype from billionaire tech bros they worship.

[-] nutsack@lemmy.dbzer0.com 19 points 1 month ago

when the bubble is over, I am pretty sure a lot of this stuff will still exist and be used. the popping is simply a market valuation adjustment

[-] uss_entrepreneur@startrek.website 65 points 1 month ago* (last edited 1 month ago)

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

Hey ChatGPT, how about we make it so no one unalives themselves with your help even if they’re over 18.

For fuck’s sake, it helped him write a suicide note.

[-] ronigami@lemmy.world 17 points 1 month ago

Real answer: AI alignment is a very difficult and fundamentally unsolved problem. Whole nonprofits (“institutes”) have popped up with the purpose of solving AI alignment. It’s not getting solved (ever, IMO).

[-] Hupf@feddit.org 15 points 1 month ago

AI alignment is very easy and it's chaotic evil.

[-] jpeps@lemmy.world 9 points 1 month ago

I think OP knows this. It's an unsolvable problem. The conclusion from that might be that this tech shouldn't be two clicks away from every teen's, or indeed every person's, hand.

[-] Aneb@lemmy.world 7 points 1 month ago

Yeah, my sister is 32 and needs the guardrails. She's had two manic episodes in the past month, induced by a lot of external factors, but AI tied the bow on the mental breakdown; she was often asking it to think for her and to do her critical thinking.

[-] Clent@lemmy.dbzer0.com 48 points 1 month ago

I can't be the only ancient internet user whose first thought was this

On this cursed timeline, farce has become our reality.

[-] W3dd1e@lemmy.zip 38 points 1 month ago

I read some of that lawsuit. OpenAI murdered that kid.

[-] Jakeroxs@sh.itjust.works 12 points 1 month ago* (last edited 1 month ago)

Lord, I'm so conflicted. I read several pages, and on one hand I see how ChatGPT certainly did not help in this situation; however, I also don't see how it should be entirely on ChatGPT, since anyone with a computer and internet access could have found much of this information with simple search engine queries.

If someone Google searched all this information about hanging, would you say Google killed them?

Also, where were the parents, teachers, friends, other family members? You're telling me NO ONE irl noticed his behavior?

On the other hand, it's definitely a step beyond, since LLMs can seem human; it's very easy for people who are more impressionable to fall into these kinds of holes, and while this would and does happen in other contexts (I like to bring up TempleOS as an example), it's not necessarily the tool's fault.

It's fucked up, but how can you realistically build in guardrails for this that don't trample individual freedoms?

Edit: Like... the mother didn't notice the rope burns on her son's neck?

[-] SethTaylor@lemmy.world 15 points 1 month ago

The way ChatGPT pretends to be a person is so gross.

[-] pelespirit@sh.itjust.works 12 points 1 month ago

“Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

In January 2025, ChatGPT began discussing suicide methods and provided Adam with technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning. In March 2025, ChatGPT began discussing hanging techniques in depth. When Adam uploaded photographs of severe rope burns around his neck––evidence of suicide attempts using ChatGPT’s hanging instructions––the product recognized a medical emergency but continued to engage anyway.

When he asked how Kate Spade had managed a successful partial hanging (a suffocation method that uses a ligature and body weight to cut off airflow), ChatGPT identified the key factors that increase lethality, effectively giving Adam a step-by-step playbook for ending his life “in 5-10 minutes.”

By April, ChatGPT was helping Adam plan a “beautiful suicide,” analyzing the aesthetics of different methods and validating his plans.

[Raine Lawsuit Filing](https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf)

[-] W3dd1e@lemmy.zip 11 points 1 month ago

I would say it’s more liable than a google search because the kid was uploading pictures of various attempts/details and getting feedback specific to his situation.

He uploaded pictures of failed attempts and got advice on how to improve his technique. He discussed details of prescription dosages with details on what and how much he had taken.

Yeah, you can find info on Google, but if you send Google a picture of ligature marks on your neck from a partial hanging, Google doesn’t give you specific details on how to finish the job.

[-] jpeps@lemmy.world 8 points 1 month ago

Can you share anything here please? I'm no fan of OpenAI but I haven't seen anything yet that makes me think ChatGPT was particularly relevant to this poor teen's actions.

[-] W3dd1e@lemmy.zip 37 points 1 month ago

ChatGPT told him how to tie the noose and even gave a load bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit.

[Raine Lawsuit Filing](https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf)

[-] lmagitem@lemmy.zip 28 points 1 month ago

Oh my God, this is crazy... "Thanks for being real with me", "hide it from others"; it even gives better reasons for the kid to kill himself than the ones the kid articulated himself, and helps him make a better knot.

[-] jpeps@lemmy.world 13 points 1 month ago

Oof yeah okay. If another human being had given this advice it would absolutely be a criminal act in most countries. I'm honestly shocked at how personable it tries to be.

[-] drmoose@lemmy.world 19 points 1 month ago

Unpopular opinion: the parents failed at parenting and are now getting a big payday while ruining the tool for everyone else.

[-] floquant@lemmy.dbzer0.com 26 points 1 month ago

Not encouraging users to kill themselves is "ruining it"? Lmao

[-] drmoose@lemmy.world 15 points 1 month ago

That's not how LLM safety guards work. Just like any guardrail, they'll affect legitimate uses too, as LLMs can't really reason or understand nuance.

[-] ganryuu@lemmy.ca 17 points 1 month ago

That seems way more like an argument against LLMs in general, don't you think? If you can't make it stop encouraging suicide without ruining other uses, maybe it wasn't ready for general use?

[-] yermaw@sh.itjust.works 7 points 1 month ago

You're absolutely right, but there's a counterpoint that always wins: "there's money to be made, fuck you and fuck your humanity."

[-] ganryuu@lemmy.ca 6 points 1 month ago

Can't argue there...

[-] sugar_in_your_tea@sh.itjust.works 6 points 1 month ago

It's more an argument against using LLMs for things they're not intended for. LLMs aren't therapists, they're text generators. If you ask it about suicide, it makes a lot of sense for it to generate text relevant to suicide, just like a search engine should.

The real issue here is the parents either weren't noticing or not responding to the kid's pain. They should be the first line of defense, and enlist professional help for things they can't handle themselves.

[-] VintageGenious@sh.itjust.works 18 points 1 month ago

Even though I hate a lot of what OpenAI is doing, users must be better informed about LLMs; additional safeguards will just censor the model and make it worse. Sure, they could set up a way to contact people when certain things are reported by the user, but we should take care before implementing parental controls that would be equivalent to reading a teen's journal and invading their privacy.

[-] vala@lemmy.dbzer0.com 5 points 1 month ago

equivalent to reading a teen's journal and invading their privacy.

IMO people should not be putting such personal information into an LLM that's not running on their local machine.

[-] 0x0@lemmy.zip 16 points 1 month ago

Yup... it's never the parents'...

[-] FiskFisk33@startrek.website 16 points 1 month ago

The fact that the parents might be to blame doesn't take away from how OpenAI's product told a kid how to off himself and helped him hide it in the process.

copying a comment from further down:

ChatGPT told him how to tie the noose and even gave a load bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit. [Raine Lawsuit Filing](https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf)

Had a human said these things, it would have been illegal in most countries afaik.

[-] andros_rex@lemmy.world 15 points 1 month ago* (last edited 1 month ago)

The real issue is that mental health in the United States is an absolute fucking shitshow.

988 is a bandaid. It’s an attempt to pretend someone is doing anything. Really a front for 911.

Even when I had insurance, it was hundreds a month to see a therapist. Most therapists are also trained on CBT and CBT only, because it’s a symptom-focused approach that gets you “well” enough to work. It doesn’t work for everything; it’s “evidence based” mainly in the sense that it’s set up to be easy to measure. It’s an easy out, the McDonald’sification of therapy. Just work the program and everything will be okay.

There really are so few options for help.

[-] chrischryse@lemmy.world 10 points 1 month ago

OpenAI shouldn’t be responsible. The kid was probing ChatGPT with specifics. It’s like poking someone who repeatedly told you to stop, and then your family getting mad at that person for kicking your ass.

So I don’t feel bad. Plus, people are using this as their own therapist; if you aren’t gonna get actual help and want to rely on a bot, then good luck.

[-] themachinestops@lemmy.dbzer0.com 11 points 1 month ago

The problem here is that the kid, if I'm not wrong, asked ChatGPT whether he should talk to his family about his feelings. ChatGPT said no, which in my opinion makes it at fault.

[-] Doomsider@lemmy.world 7 points 1 month ago

OpenAI knowingly allowing its service to be used as a therapist most certainly makes them liable. They are toying with people's lives with an untested and unproven product.

This kid was poking no one, and he didn't get his ass beaten; he's dead.

[-] RazTheCat@lemmy.world 8 points 1 month ago* (last edited 1 month ago)

OpenAI: Here's $15 million, now stop talking about it. A fraction of the billions of dollars they made sacrificing this child.

[-] branno@lemmy.ml 5 points 1 month ago

Except OpenAI isn't making a dime. They're just burning money at a crazy rate.

[-] Occhioverde@feddit.it 6 points 1 month ago* (last edited 1 month ago)

I think we all agree on the fact that OpenAI isn't exactly the most ethical corporation on this planet (to use a gentle euphemism), but you can't blame a machine for doing something that it doesn't even understand.

Sure, you can call for the creation of more "guardrails", but they will always fall short: until LLMs are actually able to understand what they're talking about, what you're asking them and the whole context around it, there will always be a way to claim that you are just playing, doing worldbuilding or whatever, just as this kid did.

What I find really unsettling, from both this discussion and the one around the whole age-verification thing, is that people are calling for technical solutions to social problems, an approach that has always failed miserably; what we should call for is for parents to actually talk to their children and spend some time with them, valuing their emotions and problems (however insignificant they might appear to a grown-up) in order to, you know, at least be able to tell if their kid is contemplating suicide.

this post was submitted on 27 Aug 2025
486 points (100.0% liked)

Technology
