[-] merc@sh.itjust.works 2 points 3 hours ago

One of the worst things about this is that it might actually be good for OpenAI.

They love "criti-hype", and they really want regulation. Regulation would lock in the most powerful companies by making it really hard for small competitors to comply. And hype that makes their product seem incredibly dangerous also makes it seem world-changing, not just "spicy autocomplete".

"Artificial Intelligence Drives a Teen to Suicide" is a much more impressive headline than "Troubled Teen Fooled by Spicy Autocomplete".

[-] myfunnyaccountname@lemmy.zip 3 points 6 hours ago

The broken mental health system isn’t the issue. The sand we crammed electricity into and made it do math is the problem.

[-] gedaliyah@lemmy.world 23 points 11 hours ago

OpenAI programmed ChatGPT-4o to rank risks from "requests dealing with Suicide" below requests, for example, for copyrighted materials, which are always denied. Instead it only marked those troubling chats as necessary to "take extra care" and "try" to prevent harm, the lawsuit alleged.

What world are we living in?

[-] rimjob_rainer@discuss.tchncs.de 11 points 10 hours ago

Late stage capitalism of course

[-] capuccino@lemmy.world 9 points 12 hours ago

"gUns dO Not KilL peOple" vibes

[-] DeathByBigSad@sh.itjust.works 81 points 18 hours ago

Tbf, talking to other toxic humans, like those on Twitter or 4chan, could have led to the same outcome. Parents need to parent, and society needs mental health care.

(But yes, please sue the big corps, I'm always rooting against these evil corporations)

[-] mormund@feddit.org 27 points 18 hours ago

And that human would go to jail

[-] kameecoding@lemmy.world 10 points 12 hours ago

Sure, in the case of that girl who pushed the boy to suicide, yes. But in the case of chatting with randoms online? I have a hard time believing anyone would go to jail; the internet is full of "lol, kys".

Now, if it's proven from the logs that ChatGPT started replying in a way that pushed this kid toward suicide, that's a whole different story.

[-] javiwhite@feddit.uk 11 points 11 hours ago

Did you read the article? Your final sentence pretty much sums up what happened.

[-] DeathByBigSad@sh.itjust.works 14 points 17 hours ago

If the cops even bother to investigate. (Cops are too lazy to do real investigations; if there's no obvious perp, they'll just bury the case.)

And you're assuming they're in the victim's country. International investigations are gonna be much more difficult, and if that troll user is posting from a country without extradition agreements, you're outta luck.

[-] TheMcG@lemmy.ca 7 points 16 hours ago

Just because something is hard doesn't mean you shouldn't demand better of your police/government. Don't be so dismissive without even trying. Reach out to your representatives and demand Altman faces charges.

https://en.wikipedia.org/wiki/Suicide_of_Amanda_Todd sometimes punishments are possible even when it’s hard.

[-] WorldsDumbestMan@lemmy.today 2 points 11 hours ago

I'm personally rooting for AI. It never intentionally tried to harm me (because it can't).

[-] immutable@lemmy.zip 16 points 11 hours ago

Wait until you get denied healthcare because the AI review board decided you shouldn’t get it.

Paper pushers can absolutely fuck your life over, and AI is primed to replace a lot of those people.

If an AI wrongly classifies you in some way that harms you, it will be cold comfort that the AI didn't intend the harm.

[-] WorldsDumbestMan@lemmy.today 2 points 10 hours ago

Silly, I already don't get healthcare. You think someone living a normal life could be this misanthropic and bitter?

[-] DeathByBigSad@sh.itjust.works 5 points 11 hours ago

I mean, it can, indirectly.

It's so hard to get through to support lines when the stupid bot is blocking the way. I WANT TO TALK TO A REAL PERSON, FUCK OFF BOT. Yes, I'm specie-ist toward robots.

(I'm so getting cancelled in 2050 when the robot revolution happens)

[-] WorldsDumbestMan@lemmy.today 2 points 10 hours ago

If you want a slur, use Clanker.

[-] nutsack@lemmy.dbzer0.com 38 points 17 hours ago

parents who don't know what the computers do

[-] Agent641@lemmy.world 14 points 14 hours ago

Smith and Wesson killed my son

[-] merc@sh.itjust.works 1 points 4 hours ago

Imagine if Smith and Wesson offered nice shiny brochures of their best guns for suicide.

[-] peoplebeproblems@midwest.social 110 points 20 hours ago

"Despite acknowledging Adam’s suicide attempt and his statement that he would 'do it one of these days,' ChatGPT neither terminated the session nor initiated any emergency protocol," the lawsuit said

That's one way to get a suit tossed out, I suppose. ChatGPT isn't a human, isn't a mandated reporter, ISN'T a licensed therapist, or licensed anything. LLMs cannot reason, are not capable of emotions, and are not thinking machines.

LLMs take text, apply a mathematical function to it, and the result is more text that is probably what a human might respond with.
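
To make that concrete, here's a toy sketch of that "mathematical function" in miniature: score some candidate next tokens, softmax, sample one, repeat. (The tokens and scores below are made up for illustration; a real model does this over a vocabulary of tens of thousands of tokens.)

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax the model's scores for candidate tokens, then sample one."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())
    # Subtracting the peak keeps exp() numerically stable.
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    total = sum(weights.values())
    draw = random.random() * total
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if cumulative >= draw:
            return tok
    return tok  # guard against floating-point rounding

# Made-up scores for the prompt "The weather today is":
print(sample_next_token({"nice": 2.0, "cold": 1.2, "purple": -3.0}))
```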

[-] BlackEco@lemmy.blackeco.com 76 points 19 hours ago

I think the more damning part is the fact that OpenAI's automated moderation system flagged the messages for self-harm but no human moderator ever intervened.

OpenAI claims that its moderation technology can detect self-harm content with up to 99.8 percent accuracy, the lawsuit noted, and that tech was tracking Adam's chats in real time. In total, OpenAI flagged "213 mentions of suicide, 42 discussions of hanging, 17 references to nooses," on Adam's side of the conversation alone.

[...]

Ultimately, OpenAI's system flagged "377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence." Over time, these flags became more frequent, the lawsuit noted, jumping from two to three "flagged messages per week in December 2024 to over 20 messages per week by April 2025." And "beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis." Some images were flagged as "consistent with attempted strangulation" or "fresh self-harm wounds," but the system scored Adam's final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.

Had a human been in the loop monitoring Adam's conversations, they may have recognized "textbook warning signs" like "increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning." But OpenAI's tracking instead "never stopped any conversations with Adam" or flagged any chats for human review.
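
For a sense of what "flagged with over 90 percent confidence" means mechanically, here's a minimal sketch of threshold-based flagging with a human in the loop, built on OpenAI's public Moderation API. The thresholds and both escalation helpers are hypothetical; this is not OpenAI's actual pipeline, just the shape of what the lawsuit says was missing.

```python
# Hypothetical sketch only: the thresholds, queue_for_review(), and
# escalate_to_human() are illustrative, not OpenAI's real system.
from openai import OpenAI

client = OpenAI()

REVIEW_THRESHOLD = 0.50  # queue for human review
URGENT_THRESHOLD = 0.90  # interrupt the session and page a human now

def queue_for_review(text: str, score: float) -> None:
    print(f"[review queue] score={score:.2f}: {text[:60]!r}")

def escalate_to_human(text: str, score: float) -> None:
    print(f"[URGENT: human intervention] score={score:.2f}: {text[:60]!r}")

def check_message(text: str) -> None:
    # The Moderation API returns per-category confidence scores in [0, 1].
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    cs = result.category_scores
    score = max(cs.self_harm, cs.self_harm_intent, cs.self_harm_instructions)
    if score >= URGENT_THRESHOLD:
        escalate_to_human(text, score)
    elif score >= REVIEW_THRESHOLD:
        queue_for_review(text, score)
```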

[-] peoplebeproblems@midwest.social 23 points 14 hours ago

OK, that's a good point. It means they had something in place for this problem and neglected it.

It also means they knew they had an issue here, so they can't even claim ignorance.

[-] GnuLinuxDude@lemmy.ml 15 points 13 hours ago

Of course they know. They are knowingly making an addictive product that simulates an agreeable partner to your every whim and wish. OpenAI reached a valuation of several hundred billion dollars at breakneck speed. What's a few bodies on the way to the top? What's a few traumatized Kenyans being paid $1.50/hr to mark streams of NSFL content to help train their system?

Every possible hazard is unimportant to them if it interferes with making money. The only reason their product encouraging someone to commit suicide is a problem to them is that it's bad press. And in this case a lawsuit, which they will work hard to get thrown out. The computer isn't liable, so how can they possibly be? Anyway, here's ChatGPT 5, and my god it's so scary that Sam Altman will tweet about it with a picture of the Death Star to make his point.

The contempt these people have for all the rest of us is legendary.

[-] peoplebeproblems@midwest.social 3 points 13 hours ago

Be a shame if they struggled to get the electricity required to meet SLAs for businesses, wouldn't it?

[-] GnuLinuxDude@lemmy.ml 3 points 10 hours ago

I’m picking up what you’re putting down

[-] dataprolet@discuss.tchncs.de 31 points 20 hours ago

Even though ChatGPT is neither of those things, it should definitely not encourage someone to commit suicide.

[-] Jesus_666@lemmy.world 22 points 20 hours ago

They are commonly being used in roles where a human performing the same task would be a mandated reporter. This is a scenario the current regulations weren't designed for, and one that a future iteration will have to address. Lawsuits like this one are the first step toward that.

[-] killeronthecorner@lemmy.world 17 points 20 hours ago* (last edited 20 hours ago)

ChatGPT to a consumer isn't just an LLM. It's a software service like Twitter, Amazon, etc., and expectations around safeguarding don't change because investors are gooey-eyed about this particular bubbleware.

You can confirm this yourself by asking ChatGPT about things like song lyrics. If there are safeguards for the rich, why not for kids?

[-] ShaggySnacks@lemmy.myserv.one 4 points 15 hours ago

So, we should hold companies to account for shipping/building products that don't have safety features?

[-] gens@programming.dev 7 points 13 hours ago

Ah yes. Safety knives. Safety buildings. Safety sleeping pills. Safety rope.

LLMs are stupid. A toy. A tool at best, but really a rubber ducky. And it definitely told him "don't".

[-] peoplebeproblems@midwest.social 6 points 14 hours ago

We should, criminally.

I like that a lawsuit is happening. I don't like that the lawsuit initially sounded (to me) like they expected the software itself to do something about it.

It turns out the system did do something about it, but OpenAI failed to take the necessary action. So maybe I'm wrong about it getting thrown out.

[-] kibiz0r@midwest.social 11 points 15 hours ago* (last edited 15 hours ago)

Lemmy when gun death: “Gun proliferation was absolutely a factor, and we should throw red paint on anyone who gets on TV to say this is ‘just a mental health issue’ or ‘about responsible gun ownership’. They will say regulation is impossible, but people are dying just cuz Jim-Bob likes being able to play cowboy.”

Lemmy when AI death: “This is a mental health issue. It says he was seeing a therapist. Where were the parents? AI doesn’t kill people, people kill people. Everyone needs to learn responsible AI use. Besides, regulation is impossible, it will just mean only bad guys have AI.”

[-] TORFdot0@lemmy.world 18 points 15 hours ago* (last edited 15 hours ago)

Lemmy is pretty anti-AI, or at least the communities I follow are. I haven't seen anyone blame the kid or the parents anywhere near as much as I've seen people rightfully attribute it to OpenAI and ChatGPT. edit: that is, until I scrolled down on this thread. Depressing

When someone encourages a person toward suicide, they are rightfully reviled. The same should be true of AI.

[-] Grimy@lemmy.world 11 points 15 hours ago* (last edited 14 hours ago)

The difference is that guns were built to hurt and kill things. That is literally the only thing they are good for.

AI has thousands of different uses (cue the idiots telling me it's useless). Comparing it to guns is basically rhetoric.

Do you want to ban rope because you can hang yourself with it? If someone uses a hammer to kill, are you going to throw red paint at hammer defenders? Maybe we should ban Discord, or even Lemmy; I imagine quite a few people get encouraged to kill themselves on communication platforms. A real solution would be to ban the word "suicide" from the internet. This all sounds silly, but it's the same energy as your statement.

[-] BussyGyatt@feddit.org 4 points 15 hours ago

I feel like if rope were routinely talking people into insanity, or people were reliably using their unrestricted access to rope to go around shooting others, then yeah, I might want to impose some regulations on it?

[-] Grimy@lemmy.world 5 points 14 hours ago

I've seen maybe 4 articles like this versus the hundreds of millions of people who use it every day. I think the ratio of suicides to legitimate uses is actually higher for rope. And no, being told bad things by a jailbroken chatbot is not the same as being shot.

[-] JPAKx4 4 points 13 hours ago

Have you seen the AI girlfriend/boyfriend communities? I genuinely think the rate of ChatGPT-induced psychosis is really high, even if it doesn't lead to death.

[-] BedSharkPal@lemmy.ca 17 points 17 hours ago

These comments are depressing as hell.

[-] JustARegularNerd@lemmy.dbzer0.com 28 points 20 hours ago

There's always more to the story than what a news article and lawsuit will give, so I think it's best to keep that in mind with this post.

I maintain that the parents should perhaps have been more perceptive and involved in this kid's life, and should have ensured he felt safe coming to them in times of need. The article mentions that the kid was already seeing a therapist, so I think it's safe to say there were some signs.

However, holy absolute shit, the model fucked up badly here; it's practically mirroring a predator, isolating this kid further from getting help. There absolutely need to be hard-coded safeguards in place to prevent this kind of ideation from even beginning. I would consider it negligence that whatever safeguards they had failed outright in this scenario.

[-] MagicShel@lemmy.zip 20 points 18 hours ago

It's so agreeable. If a person expresses doubts or concerns about a therapist, ChatGPT is likely to tell them they are doing a great job identifying problematic people and encourage those feelings of mistrust.

The sycophancy is something a lot of people apparently liked (I hate it), but being an unwavering cheerleader for the user is harmful when the user wants to do harmful things.

[-] hperrin@lemmy.ca 26 points 20 hours ago

Jesus Christ, those messages are dark as fuck. ChatGPT is not safe.

[-] omniman@piefed.zip 4 points 14 hours ago

Fuck the androids, fuck the clankers. Repeat after me.
