Interesting fact: many of the bigger Lemmy instances are already using AI systems to filter dangerous content out of pictures before they even get uploaded.
Context: Last year there was a large spam attack involving CSAM and gore across multiple instances. Some had to shut down temporarily because they couldn't keep up with moderation. I don't remember the name of the tool, but some people made a program that uses AI to try to recognize these types of images and filter them out. This heavily reduced the amount of moderation needed during those attacks.
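For anyone curious how that kind of pre-upload filter might work in principle, here is a minimal sketch in Python using a generic pretrained image classifier. The model name, labels, and threshold are placeholders I made up for illustration, not the actual tool those instances use:

```python
from transformers import pipeline
from PIL import Image

# Minimal sketch of a pre-upload image filter. The model name below is a
# placeholder, not the real tool used by Lemmy instances.
classifier = pipeline("image-classification", model="some-org/unsafe-image-detector")

def should_block(path: str, threshold: float = 0.8) -> bool:
    """Return True if the classifier flags the image above the threshold."""
    image = Image.open(path).convert("RGB")
    results = classifier(image)  # list of {"label": ..., "score": ...}
    flagged_labels = {"nsfw", "gore", "unsafe"}  # hypothetical label set
    return any(
        r["label"].lower() in flagged_labels and r["score"] >= threshold
        for r in results
    )

# In an upload handler you would reject the file before it ever hits storage,
# and ideally queue borderline cases for human review instead of deleting them.
if should_block("/tmp/incoming_upload.jpg"):
    print("Upload rejected by automated pre-filter; queued for human review.")
```

The key design point is that nothing gets written to disk or shown to a moderator until the classifier has scored it, which is what cuts down the amount of harmful material humans have to look at during an attack.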
Early AI moderation systems are actually something more platforms should use. Human moderators, even paid ones, shouldn't need to go through large amounts of violent content every day. Moderators at Facebook have been arguing these points for a while now; many of them have developed mental health issues through their work and don't get any medical support. So no matter what you think of AI and whether it's moral, this is actually one of the few good applications in my opinion.
I agree, but it's also not surprising. I think somebody else posted the article about Kenyan Facebook moderators somewhere in this comment section if you want to know more.