FBI Arrests Man For Generating AI Child Sexual Abuse Imagery
(www.404media.co)
And the Stable Diffusion team gets no backlash from this for allowing it in the first place?
Why aren't they flagging these users immediately when they enter text prompts to generate this kind of thing?
My main question is: how much CSAM was fed into the model during training for it to be able to recreate more?
I think it'd be worth investigating the training data used for the model.
Because what prompts people enter on their own computer isn't the model makers' responsibility. Should pencil makers flag people who write bad words?
Isn't there evidence that as artificial CSAM becomes more available, the actual amount of abuse goes down? I would research this, but I'm at work.