I think the more damning part is the fact that OpenAI's automated moderation system flagged the messages for self-harm but no human moderator ever intervened.
Ok that's a good point. This means they had something in place for this problem and neglected it.
It also means they knew they had an issue here, so they can't even plead ignorance.
Of course they know. They are knowingly making an addictive product that simulates an agreeable partner to your every whim and wish. OpenAI has a valuation of several hundred billion dollars, which they achieved at breakneck speed. What's a few bodies on the way to the top? What's a few traumatized Kenyans being paid $1.50/hr to mark streams of NSFL content to help train their system?
Every possible hazard is unimportant to them if it interferes with making money. The only reason their product encouraging someone to commit suicide is a problem at all is that it's bad press. And in this case a lawsuit, which they will work hard to get thrown out. The computer isn't liable, so how can they possibly be? Anyway, here's ChatGPT 5, and my god it's so scary that Sam Altman will tweet about it with a picture of the Death Star to make his point.
The contempt these people have for all the rest of us is legendary.
Be a shame if they struggled to get the electricity required to meet their SLAs for business customers, wouldn't it.
I’m picking up what you’re putting down
Human moderator? ChatGPT isn't a social platform; I wouldn't expect there to be any actual moderation. A human couldn't really do anything besides shut down a user's account. They probably wouldn't even have access to any conversations or PII, because that would be a privacy nightmare.
Also, those moderation scores can be wildly inaccurate. I think people would quickly get frustrated using it when half the stuff they write gets flagged as something like:
hate speech: .56, violence: .43, self harm: .29
Those mid-range numbers are really ambiguous in my experience.
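For anyone who hasn't used it, this is roughly what pulling those scores looks like with the openai Python SDK. A minimal sketch: the 0.5 cutoff and the escalate_to_human() helper are my own hypothetical placeholders, not anything OpenAI actually runs.

```python
# Sketch: score a message with OpenAI's moderation endpoint and decide
# whether a human should ever see it. The 0.5 cutoff and
# escalate_to_human() are hypothetical placeholders, not OpenAI's
# actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def escalate_to_human(message: str, scores) -> None:
    # Placeholder: a real system would push this into a review queue.
    print(f"flagged for review: {scores}")


def check_message(message: str) -> None:
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=message,
    ).results[0]

    scores = result.category_scores
    # Mid-range scores like 0.29-0.56 are exactly the ambiguous zone:
    # too high to ignore outright, too low to auto-action without
    # drowning reviewers in false positives.
    if max(scores.hate, scores.violence, scores.self_harm) > 0.5:
        escalate_to_human(message, scores)
```

Wherever you set that threshold, you're trading missed cases against reviewer overload, which is the whole problem with leaning on these scores alone.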
As of a few weeks ago, a lot of ChatGPT logs got leaked via search indexing. So privacy was never really a concern for OpenAI, let's be real.
And it doesn't matter what type of platform they think they run. Altman himself talks about it replacing therapy and how it can do everything. So in a reasonable world he'd have ungodly personal liability for this shit. But let's see where it will go.
Those conversations were shared by the users, who checked a box saying to make them discoverable by web searches. I wouldn't call that "leaked", and OpenAI immediately removed the feature after people obviously couldn't be trusted to use it responsibly, so that kind of seems like privacy is a concern for them.
I forget the exact wording, but it was misleading. It was phrased like "make discoverable", but the actual functionality submitted each one directly for indexing.
At least to my understanding, which is filtered through shoddy tech journalism.
It was exactly this, and they could have explained what it was doing in more detail, but that probably would have made those people even less likely to read it.
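The web mechanics behind "discoverable", as I understand them, come down to whether the shared page tells crawlers to stay out. A rough sketch of checking that in Python; the URL and helper name are hypothetical, and this is a general illustration of how indexability works, not OpenAI's actual implementation.

```python
# Sketch: check whether a shared-chat URL tells crawlers to stay out.
# The point is that "public if you have the link" and "indexable by
# search engines" are controlled separately: a public page with no
# "noindex" robots meta tag is fair game for crawlers.
import urllib.request
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.robots = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.robots = a.get("content", "")


def allows_indexing(url: str) -> bool:
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = RobotsMetaParser()
    parser.feed(html)
    # No robots meta tag, or one without "noindex", means search
    # engines are free to index the page once they find a link to it.
    return parser.robots is None or "noindex" not in parser.robots.lower()
```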
I can't tell if Altman is spouting marketing or really believes his own bullshit. AI is a toy and a tool, but it is not a serious product. All that stuff about AI replacing everyone is not the case, and in any event he wants someone else to build on top of ChatGPT so the liability is theirs.
As for the logs, I hadn't heard that and would want to understand the provenance and whether they contained PII other than what the user shared. Whether they are kept secure or not, making them available to thousands of moderators is a privacy concern.
I'm looking forward to seeing how the AI Act will be interpreted in Europe with regard to OpenAI's responsibility. I could see them bearing such a responsibility if a court decides that their product has sufficient impact on people's lives. Not because they advertise such usage (as a virtual therapist or virtual friend), but because users are reasonably using it that way.
My theory is they are letting people kill themselves to gather data, so they can predict future suicides... or even cause them.