this post was submitted on 21 Oct 2024
TechTakes
Character.ai is being sued after one of its users killed himself, and The New York Times is covering it (there's also a piece by Gary Marcus on a previous incident if you're interested).
Like the copyright situation I previously mentioned, I suspect this is also gonna make potential investors wary of investing in AI post-bubble. Even if you manage to convince them you won't get DMCA'd into oblivion, they're still gonna worry about the potential for a Dasani-level PR nightmare.
Of course, that's assuming Section 230 protects you from being held liable for what your autoplag does. If Ms. Garcia, whose son's suicide prompted this entire mess, succeeds in court, the resulting legal precedent means you're likely gonna have to worry about being sued if/when someone ends up injured, killed, defamed, or otherwise fucked up because of its output.
I skimmed the reddit thread in search of general public sentiment about this, but unfortunately mostly just found a greatest-hits compilation of very gross comments.
According to these very smart people, parents should expect their teenager to die as a consequence of not being perfect people 24/7, technology can never be at fault even when it literally tells you to commit suicide in coded language, and it's actually impossible to figure out which parts of society are making kids depressed, so we must take it as a given that nothing can be done about it. I regret having done this to myself.
This is nothing. The CharacterAI subreddit was in full meltdown earlier.