I actually don't think this is shocking or something that needs to be "investigated." Other than the sketchy website that doesn't secure users' data, that is.
Actual child abuse / grooming happens on social media, chat services, and in local churches, not in a one-on-one between a user and an LLM.
insert surprised pikachu face here
Wait… so you mean to tell me that predatory simps are using AI incorrectly? Man… if only someone had called this years ago, something could have been done to minimize it!
Who knew that unchecked growth could lead to negative results?!
But they did; AI Dungeon got nerfed so badly you could only have happy adventures.
Ain't that what the tools are there for? I mean, I don't like CP and I don't want to engage in any way with people who like it. But I use those LLMs to describe fantasies that I wouldn't even talk about with other humans.
As long as they don't do it to real humans, nobody is hurt.
The problem with AI-generated CP is that if it's legal, it opens a new line of defense for actual CP: you would need to prove the content is not AI-generated to convict real abusers. This is why it can't be made legal; it needs to be prosecuted like real CP to make sure actual abusers get convicted.
This is an incredibly touchy and complicated topic, so I will try not to go much further into it.
But prosecuting what is essentially a work of fiction seems bad.
This is not even a topic new to AI. CP has been widely represented in both written and graphical media, and the consensus in most free countries is not to prosecute those works, as they are fiction.
I cannot think why AI-written CP fiction is different from human-written CP fiction.
I suppose "AI big bad" justifies it for some. But for me, if we are going to start prosecuting works of fiction, there should be a logical explanation of why some will be prosecuted and others will not. Especially when the one being prosecuted is just regurgitating the human-written stories about CP that are not being prosecuted nowadays.
I essentially think that a work of fiction should never be prosecuted to begin with, no matter the topic. And I also think that an AI writing about CP is no worse than an actual human doing the same thing.
A bit off topic... But from my understanding, the US currently doesn't have a single federal agency responsible for AI regulation... However, there is an agency for child abuse protection: the National Center on Child Abuse and Neglect within the Department of HHS.
If AI girlfriends generating CSAM is how we get AI regulation in the US, I'd be equally surprised and appalled
Paywall. That site frankly does not even look legit, and looking at the plethora of other AI sites, I don't know who would use this one. It's not even displaying correctly and has, like, zero information on anything. If I were to stumble upon that site, I'd think it was shady as hell.
I mean, a lot of women were raped as children, sadly.
So it's pretty realistic for an AI gf to talk about her past trauma of child sexual abuse. I don't think we should be upset about this.
We need to talk about rape culture to end rape culture
This is a weird one, because while fantasy is fantasy, and doesn't necessarily indicate an intention to act on anything, these people were dumb enough to share these specific fantasies with some random AI porn site. That's got to be an indicator of poor impulse control, right?
That alone should probably warrant immediate FBI background checks, or whatever relevant agencies have jurisdiction for these types of criminal investigations in each user's locality.
Of course, I am saying this without actually having read any of the chats. So it's possible my opinion would change from "this should be investigated" to summary executions and burning the bodies for good measure... but no way I'm reading those fucking chats.
Just to be clear, are you saying that people should be investigated by the police for fictional stories that they read?
*ask to be written
I mean, if those stories were generated from their prompts and were about having sex with children, then maybe 🤷♂️
I know we need to draw a line about what police can do with that sort of info so it’s not abused, but these people are still sick fucks.
Now that devices are starting to have built-in AI features automatically combing through all the information on them, the idea of this sort of stuff being logged in the first place is concerning.
For instance, should someone prompting an AI to describe them beating up and torturing their boss be flagged for "potentially violent tendencies"? Who decides the "limit" where "privacy" no longer applies and stuff should be flagged, logged and sent off to authorities?
As I see it, the real issue is people being hurt, not text or fictional material, however sickening it might be.
If the resources invested in spying on people and building databases were instead directed towards funding robust and publicly available psychiatric care, I expect that would be more effective.
And what shall be the threshold for criminalizing simply being a sick fuck?
Right? Let’s demonize vore fetishists next, those are clearly cannibals in waiting.
No, I am saying that sharing fantasies about underage children with a shady and poorly designed AI porn site shows a serious lack of judgement and impulse control.
For that reason, yeah, they probably deserve a quick review of their lives to make sure that's the only poor choice they've made in regard to that particular fantasy.
And they weren't just reading; they were prompting the LLM to generate these specific fantasies. They didn't just come across a fucked-up website and read a few forum posts.
If we investigated everyone with poor impulse control, we'd be investigating 80% of the world.
I'm just saying, police investigation of fiction creators and readers for the content of their fiction is way over the line of a lot of social and political norms.
(Also, I think you'll find that police abuse children a lot more than pervy fiction fans do; so really, who should be investigating whom? Investigation into crime is supposed to start with evidence that a crime actually occurred — not with your personal disgust towards someone's reading matter.)
Is that an indicator of poor impulse control? Really? Finding some shady back-of-the-internet AI site to put some weird fantasy prompts into to get themselves off? Seems pretty calculated to me. They can't put it somewhere legitimate where content is moderated and policed. Seems like pretty sound logic to me.
Don't get me wrong, these people are sick. If that's what they are into, then there's something wrong with them. But instead of targeting real kids like so many people actually do (you know, Hollywood, celebrities, musicians, the Catholic Church, etc.), they are entering prompts and reading stories. Sounds like impulse control to me.