Child Safety on Federated Social Media
(purl.stanford.edu)
Basically we don't know what they found, because they just looked up hashtags and then didn't look at the results, for ethics reasons. They don't even say which hashtags they searched.
We do know they only found, what, 112 actual images of CP? That's a very small number. I'd say that paints us in a pretty good light, relatively.
112 images out of 325,000 scanned over two days is about 0.03%, so we are doing pretty well. With more moderation tools we could keep driving that rate down.
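As a rough sanity check on that figure, here's the arithmetic using the numbers quoted above (a quick sketch, not taken from the report itself):

```python
# Rate implied by the numbers quoted in this thread:
# 112 matches out of 325,000 posts scanned over two days.
known_csam = 112
posts_scanned = 325_000
print(f"{known_csam / posts_scanned:.3%}")  # 0.034%
```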
It says 112 instances of known CSAM. But that's based on their methodology, right, and their methodology is not actually looking at the content; it's looking up hashtags and checking whether Google SafeSearch thinks the result is explicit. Which I'm pretty sure doesn't differentiate what the subject of the explicit content is. It's just going to try to detect breasts or genitals, I imagine.
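For what it's worth, here's a minimal sketch of what a SafeSearch-style check looks like with the Google Cloud Vision client; the filename is a placeholder, and this is an assumption about the general shape of such a pipeline, not the study's actual code. Note that the response only grades how explicit an image is (adult/racy likelihoods); it says nothing about who is depicted:

```python
# Assumed illustration: Google Cloud Vision's SafeSearch annotation.
# Requires the google-cloud-vision package and application credentials.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("sample.jpg", "rb") as f:  # placeholder image path
    image = vision.Image(content=f.read())

response = client.safe_search_detection(image=image)
annotation = response.safe_search_annotation

# Each field is a Likelihood enum (VERY_UNLIKELY .. VERY_LIKELY).
# There is no field describing the subject's age.
print("adult:", vision.Likelihood(annotation.adult).name)
print("racy:", vision.Likelihood(annotation.racy).name)
```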
Though they do give a few damning examples, like actual CP trading, but they also note that those have since been removed.