[-] BussyGyatt@feddit.org 1 points 19 hours ago* (last edited 19 hours ago)

you want me to explain it differently, and I will. that's a very reasonable request.

i think we should regulate things that can be shown to be dangerous to individuals or society as a whole. I will take your rope example as not dangerous in that way and leave it unexamined, assuming you agree. compare that to guns. guns are dangerous, and you seem to agree with this too. rope is different from a gun, but both can be used to kill people. so why don't we regulate rope? in a nutshell, because it takes a hell of a lot of effort to hurt or kill someone with rope. compare that to a gun: the amount of effort required to kill a person, or many people, with a modern firearm is a physical triviality comparable to brushing your teeth or changing your clothes. guns can cause harm without anyone even trying, but you have to go out of your way to hurt someone with rope.

compare that with the current unregulated implementation of chatbots, as in the case of this child's suicide: a technology which can calmly sit with you and convince you that your suicide is a beautiful expression of individuality, or whatever sycophantic bullshit that desperate child read.

here, let's remind ourselves of some of the details presented in the article. This will no doubt be a refresher for you.

> mourning parents Matt and Maria Raine alleged that the chatbot offered to draft their 16-year-old son Adam a suicide note after teaching the teen how to subvert safety features and generate technical instructions to help Adam follow through on what ChatGPT claimed would be a "beautiful suicide."

> Adam's family was shocked by his death last April, unaware the chatbot was romanticizing suicide while allegedly isolating the teen and discouraging interventions.

> On Tuesday, OpenAI published a blog, insisting that "if someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help" and promising that "we're working closely with 90+ physicians across 30+ countries—psychiatrists, pediatricians, and general practitioners—and we're convening an advisory group of experts in mental health, youth development, and human-computer interaction to ensure our approach reflects the latest research and best practices."

so, according to this lawsuit, a child was taught to circumvent chatgpt's safety measures by chatgpt itself and encouraged to commit suicide, and all of this happened despite the model being specifically trained not to do this. this happened despite the large amount of effort that was put into avoiding exactly this outcome.

that this is even a possibility means we do not have the control over this technology that it might otherwise appear we do. Uncontrollable technology is dangerous. Dangerous technology should be regulated. thanks for coming to my ted talk.

for more information about AI safety, check out Robert Miles.

this post was submitted on 27 Aug 2025