You know, I've had an idea fermenting for some time now around how content moderation at scale might work. I have no idea whether it's feasible, nor do I have the technical expertise to bring it to fruition, but I think the following points suggest that content moderation at scale could be possible:
- The 90-9-1 rule doesn't just apply to lurking and commenting on websites; it describes many facets of participation. It's who creates videos, who volunteers to moderate - really all aspects of user interaction.
- People like to feel included and useful in communities, and they contribute in ways that work for them. For some, this is money; for some, it's the creation of art; some socialize, some connect, some offer goods and services, some trade, etc.
- Moderating content doesn't have to be so centralized. The final call doesn't even have to rest with a single individual - it can be a crowd-sourced decision (on nuanced or important issues, it often already is, with groups of moderators talking things through).
- In-person content moderation - a community policing behavior amongst its own members - often takes the form of a lot of talking and a spread-out reaction to an incident or incidents. Being a bad actor in a small town might carry many minor negative social consequences.
When all of this combines, it makes you wonder whether content moderation couldn't work more like how a small town deals with a problematic individual - which is to say, lots of small interactions with that person: some people helping, others chastising, some educating, and the person's actions being more closely watched.

How does this translate to a digital environment? That's the part I'm still trying to figure out. Perhaps problematic comments can be flagged by other users, as in existing systems, but then fall into a queue where regular community members vote on how appropriate each comment was, with some kind of credit system (perhaps influenced by how much those voters contribute to, or receive positive feedback in, that particular community) determining the outcome. Many of the conversational parts of this community feedback already happen - people argue with or push back against problematic users, and also educate or try to help them. A system might even link problematic individuals up with self-flagged educators willing to talk with them directly and help them learn and grow. Honestly, I don't know all the specifics, but I think it's interesting to think about.
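To make the flag-queue-and-vote idea a bit more concrete, here's a minimal sketch of how the core decision might work. Everything here is hypothetical: the `credit` mapping (how users earn standing in a community), the quorum size, and the "keep"/"remove" outcomes are all assumptions I'm making just to illustrate the mechanism, not a real design.

```python
from dataclasses import dataclass, field

@dataclass
class Vote:
    user: str
    approve: bool  # True = the comment is fine, False = it should go

@dataclass
class FlaggedComment:
    comment_id: str
    votes: list = field(default_factory=list)

def resolve(flagged: FlaggedComment, credit: dict, quorum: int = 3) -> str:
    """Decide a flagged comment's fate by credit-weighted community vote.

    `credit` maps usernames to a score earned through positive
    contributions in this community (how that score accrues is the
    open question in the idea above). Unknown voters count with a
    baseline weight of 1. Returns "pending" until enough people have
    weighed in, then "keep" or "remove" by weighted majority.
    """
    if len(flagged.votes) < quorum:
        return "pending"
    keep = sum(credit.get(v.user, 1) for v in flagged.votes if v.approve)
    remove = sum(credit.get(v.user, 1) for v in flagged.votes if not v.approve)
    # Ties favor keeping the comment - a deliberate (and debatable) choice.
    return "keep" if keep >= remove else "remove"
```

So a trusted long-time contributor's vote could outweigh two drive-by votes: with `credit = {"alice": 5, "bob": 1, "carol": 1}`, Alice voting to keep a comment beats Bob and Carol voting to remove it. Whether that's desirable is exactly the kind of trade-off such a system would have to wrestle with.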