submitted 2 days ago* (last edited 2 days ago) by CyberSage@piefed.social to c/piefed_meta@piefed.social

I was thinking about moderation in PieFed after reading @rimu@piefed.social mention he doesn’t want NSFW content because it creates more work to moderate. But if done right, moderation shouldn’t fall heavily on admins at all.

One of the biggest flaws of Reddit is the imbalance between users and moderators: it leads to endless reliance on automods, AI filters, and the usual complaints about power-mods. Most federated platforms just copy that model instead of adopting proven alternatives like Discourse's trust-level system.

On Discourse, moderation power gets distributed across active, trusted users. You don’t see the same tension between "users vs. mods," and it scales much better without requiring admins to constantly police content. That sort of system feels like a much healthier direction for PieFed.

Implementing this could mean trust levels earned through engagement within each community: users accrue trust by spending time reading and participating in discussions. Trust could be community-specific, letting users build reputation separately in each community, or instance-wide, granting broader recognition based on overall activity across the instance. If executed carelessly, though, it risks overmoderation like Stack Overflow's, where genuine contributions get stifled, or karma farming like Reddit's, where users game the system with bots that repost popular content.
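
To make that concrete, here's a rough sketch of what trust accrual might look like. The names and thresholds are made up for illustration, loosely inspired by Discourse's defaults, not anything PieFed implements:

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    days_visited: int = 0
    posts_read: int = 0
    replies_made: int = 0

def trust_level(e: Engagement) -> int:
    """Map engagement to a trust level; cutoffs are illustrative only."""
    if e.days_visited >= 15 and e.posts_read >= 300:
        return 3 if e.replies_made >= 30 else 2   # "regular" vs. "member"
    if e.posts_read >= 30:
        return 1                                   # "basic"
    return 0                                       # "new"

# Community-specific vs. instance-wide trust is mostly a keying question:
community_trust: dict[tuple[str, str], Engagement] = {}  # (user, community)
instance_trust: dict[str, Engagement] = {}                # user only
```

Higher levels would unlock moderation abilities (flagging, hiding, eventually removing), which is where the overmoderation and karma-farming risks above come in.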

Worth checking out this related discussion:
Rethinking Moderation: A Call for Trust Level Systems in the Fediverse.

[-] WillStealYourUsername@piefed.blahaj.zone 1 points 2 days ago* (last edited 2 days ago)

I'm always concerned with these kinds of systems and how minorities would be treated within them. Plenty of anti-trans content gets upvoted by non-trans people from a number of other instances, both on and off trans instances. Any such system would favor the most popular opinions and disallow everything else, at least as I understand these proposals when they're explained to me.

There's also the issue that mods would still have to exist, and they would need the ability to ban users and to remove spam and unacceptable content. So how do you make sure those features aren't also used to just do moderation the old-fashioned way?

And how do trusted users work in a federated system? Are users trusted on one server trusted on another? If so that makes things worse for minorities again and allows for abusive brigading. Are users only trusted on their home instance? If so that's better, but minorities are still at a disadvantage outside of their own instances.

There's also the issue of scale. PieFed/Lemmy isn't large. What is the threshold to remove something? What happens when a racist post gets few reports? How long does it stay up while it slowly accrues enough of them? Any such system would need to scale individually and automatically to the activity level of each community, which might be a problem in small comms (see the sketch below). And there are cases where non-marginalized people struggle to understand when something is marginalizing, so they defend it as free speech. What happens then? Will there be enough minorities around to remove it? I doubt it.
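
Just to illustrate the scaling problem: a removal threshold would probably have to float with community activity rather than be a fixed count. A toy version, with a formula and numbers I invented, not something PieFed does:

```python
import math

def removal_threshold(weekly_active_users: int) -> int:
    """Reports needed to auto-hide a post, scaled to community activity.

    Grows sub-linearly, so a small community isn't stuck waiting
    for a fixed 50 reports that will never arrive.
    """
    return max(2, math.ceil(math.log2(weekly_active_users + 1)))

def should_hide(report_count: int, weekly_active_users: int) -> bool:
    return report_count >= removal_threshold(weekly_active_users)

# ~10 weekly actives -> hidden at 4 reports; ~10,000 -> hidden at 14.
```

Even with something like this, the original worry stands: if the people doing the reporting are the majority, the math doesn't help the minority.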

I'm sure there is some way to make some form of self-moderation, but it would need to be well thought out.

[-] OpenStars@piefed.social 1 points 21 hours ago

PieFed, unlike Lemmy, tracks community-specific values, yes, "karma" if you will. So if someone builds up a strong reputation and long membership elsewhere, that won't help one iota within the specific community in question, if the mod chooses those settings (disclosure: I've only read about these features and have no direct mod experience on a PieFed instance).

Also, votes can be weighted differentially based on how "trusted" the originating instance is, say, instances not known as spreaders of disinformation. This works at the instance level at least, though it would probably be helpful to extend the model to individual communities as well.

So someone could spin up 10 private instances with 10 accounts on each to try to influence vote counts. Since Lemmy only allows "upvote" vs. "downvote," it is susceptible to this kind of malicious interference, but PieFed offers multiple methods to limit and minimize that behavior: each of those 100 alt accounts would need to be considered a helpful member of the community and be upvoted often in order to karma-farm enough to influence voting patterns. Though let's face it, if someone is willing to go to all that trouble, could they really be kept at bay by any automated, or even entirely manual, system? Generally the best that can be done is to raise the level of effort required until it isn't worth the reward, and PieFed certainly does that! (While Lemmy does little to nothing directly, some instance admins have their own approaches, using a mix of automated help and manual decision-making.)
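
Back-of-the-envelope, differential weighting might look something like this. The tiers, weights, and field names are my invention for illustration, not PieFed's actual schema:

```python
# Hypothetical instance trust tiers; PieFed's real logic will differ.
INSTANCE_WEIGHT = {
    "trusted": 1.0,   # long-standing, well-moderated instances
    "unknown": 0.25,  # new or unvetted instances
    "flagged": 0.1,   # known disinformation spreaders
}

def vote_weight(instance_status: str, community_karma: int) -> float:
    """Scale a vote by origin-instance trust and in-community reputation."""
    base = INSTANCE_WEIGHT.get(instance_status, 0.25)
    # Accounts with no standing in *this* community count for little:
    # karma 0 -> 0.2, karma 80+ -> capped at 1.0.
    reputation = min(1.0, 0.2 + community_karma / 100)
    return base * reputation

def tally(ballots: list[tuple[str, int]]) -> float:
    """Sum upvotes, where each ballot is (instance_status, community_karma)."""
    return sum(vote_weight(status, karma) for status, karma in ballots)

# 100 fresh alts on unknown instances: 100 * (0.25 * 0.2) = 5 effective votes.
```

The point isn't the exact numbers, it's that the attacker's 100 accounts buy almost nothing until each one has done real, sustained participation in the target community.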

[-] CyberSage@piefed.social 1 points 2 days ago

I appreciate your insights, but I see many issues raised without clear suggestions for how to enhance the moderation system effectively.

[-] Skavau@piefed.social 1 points 2 days ago

Well, are you against the idea that an individual or a few people, whether they gain the position democratically or on a first-come-first-served basis, are allowed to moderate a community as they see fit?
