submitted 3 years ago* (last edited 2 years ago) by nutomic@lemmy.ml to c/announcements@lemmy.ml

Recently there have been some discussions about the political stances of the Lemmy developers and site admins. To clear up some misconceptions: Lemmy is run by a team of people with different ideologies, including anti-capitalist, communist, anarchist, and others. While @dessalines and I are communists, we make decisions collectively, and we don't demand that anyone adopt our views or convert to our ideologies. We wouldn't devote so much time to building a federated site otherwise.

What's important to us is that you follow the site rules and Code of Conduct. That primarily means no bigotry, and being respectful towards others. As long as that is the case, we can get along perfectly fine.

In general we are open to constructive feedback, so please contact any member of the admin team if you have an idea for how to improve Lemmy.

Slur Filter

We also noticed a consistent criticism of the built-in slur filter in Lemmy. Not so much on lemmy.ml itself, but whenever Lemmy is recommended elsewhere, a few usual suspects keep bringing it up. To these people we say the following: we are using the slur filter as a tool to keep a friendly atmosphere, and to prevent racists, sexists and other bigots from using Lemmy. Its existence alone has led many of them to not make an account, or run an instance: a clear net positive.

You can see for yourself the words which are blocked (content warning, link here). Note that it doesn't include any simple swear words, but only slurs which are used to insult and attack other people. If you want to use any of these words, then please stay on one of the many platforms that permit them. Lemmy is not for you, and we don't want you here.

We are fully aware that the slur filter is not perfect. It is made for American English, and can give false positives in other languages or dialects. We are entirely willing to fix such problems on a case-by-case basis; simply open an issue in our repo with a description of the problem.
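To illustrate how such false positives can arise, here is a minimal sketch of a regex-based word filter. This is not Lemmy's actual implementation, and the blocked word is a harmless placeholder; it only shows why naive substring matching flags innocent words that happen to contain a blocked term, and how word-boundary matching avoids many of those cases.

```python
import re

# Placeholder blocklist for illustration only; the real list lives in Lemmy's source.
BLOCKED = ["badword"]

# Naive approach: match the term anywhere, even inside a longer innocent word.
naive = re.compile("|".join(map(re.escape, BLOCKED)), re.IGNORECASE)

# Word-boundary approach: only match the term as a standalone word.
bounded = re.compile(r"\b(?:" + "|".join(map(re.escape, BLOCKED)) + r")\b", re.IGNORECASE)

def is_blocked(text: str, pattern: re.Pattern) -> bool:
    """Return True if the given filter pattern matches anywhere in the text."""
    return pattern.search(text) is not None

print(is_blocked("embadwordded", naive))    # True: a false positive on a substring
print(is_blocked("embadwordded", bounded))  # False: boundaries reject the substring
print(is_blocked("BadWord here", bounded))  # True: real occurrence, case-insensitive
```

Even the word-boundary version still misfires across languages, where a blocked English term can be an ordinary word, which is exactly the case-by-case situation described above.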

[-] PP44@lemmy.ml 1 points 3 years ago

I quite agree with you that moderation is hardly a machine job, and I am not saying it is the perfect solution. It sure has its drawbacks. I am just arguing that the benefits outweigh them. I would prefer to live in a world where such filters are not needed, but in the world as it is today, I admit I prefer having this filter rather than not having it, mostly because of the systemic effects I explained.

I agree that the relevance of the content of the filter can be discussed too, and that banning some words can make it difficult to discuss certain topics. But I think some words are almost always meant to harm, and can easily be replaced by a more positive or neutral term.

As a direct example: I can talk in this post about homosexuality, and I can even paraphrase to discuss the way a certain f-word is used as a slur for it, and why I think allowing it here isn't a good idea in my opinion. See, I can talk about it, and be respectful about it. I just refrain from calling you a [insert here whatever banned slur] while pretending to use my free speech.

[-] southerntofu@lemmy.ml 1 points 3 years ago

I prefer having this filter rather than not having it, mostly because of the systemic effects I explained.

That's also the case for me, in case that was not clear :)

I think some words are almost always meant to harm, and can be easily replace by more positive or neutral term.

I don't think it's that easy, because of the context. Should all usage of the n***** word by black people be prevented? Should all usage of the w****/b**** words by queer/femme folks in a sex-positive context be prevented? etc. I agree with you that using these words is inappropriate most of the time and that we can find better words, however white male technologists have a long history of dictating how software can be used (and who it's for), and i believe there's something wrong in that power dynamic in and of itself. It's not uncommon for measures of control introduced "to protect the oppressed" to turn into serious popular repression.

Still, like i said i like this filter in practice, and it's part of the reason i'm here (the no-fascism policy). As a militant antifascist AFK, i need to reflect on this and ponder whether automatic censorship is ok in the name of antifascism: it seems pretty effective so far, if only as a psychological barrier. And i strongly believe we should moderate speech and explain why we consider certain words/concepts to be mental barriers, but i'm really bothered on an ethical level by dismissing content without human interaction. Isn't that precisely what we critique in Youtube/Facebook/etc? I'm not exactly placing these examples on the same level as a slur filter though ;)

[-] PP44@lemmy.ml 1 points 3 years ago* (last edited 3 years ago)

As often in a friendly debate, I think in the end we mostly agree. I especially agree with you that reclaiming a word is a valid way of using some slurs, and that it should not be up to a privileged group to choose when a word is ok or not. On this point I have to point out that this is still the case with manual moderation, if most moderators are privileged. So I agree that diversity should be pushed in all places of power, and all decisions are better made (and more legitimate) with diversity in the group that makes them.

But on the automated part, I really think the psychological aspect is strong and should be questioned. You talk about "human interaction", but this notion is really hard not only to define, but also to defend as an effective way of reaching your goals. I am quite sure that when the devs made their filter, there was quite a lot of human interaction and debate around it, and the simple fact that they put one in shows that they interacted with other people around them. And is "manual" moderation really a human interaction when you don't see or know the person, and don't know their culture, the context, their tone, etc.? Moderation will never be perfect; it will always involve bad decisions and errors. When errors are made "directly" by humans, compassion and empathy help us try to understand before judging (but judging nonetheless in the end, don't get me wrong). Why is it so different when they are made by an automated system (created by an imperfect human)? Why is an automated error worse than a human one if the consequences are the same?

Long story short, I don't like reasoning from grand principles like "automated moderation is dangerous", but rather try to analyze the situation and ask: would this place be better without this automated moderation? I agree that what counts as "better" is of course a wide and difficult debate, but the focus should always be this one: how to make things better.

Thank you so much for your answer. I'm not used to debating online because I didn't feel at ease anywhere else before, but I love it, and it is thanks to people like you and all the other interesting answers I get that I can enjoy it and think about it so much! Thank you thank you <3 !!

(edit : typo)

[-] nutomic@lemmy.ml 1 points 3 years ago* (last edited 3 years ago)

Thanks for your comment, I'm really happy to read something like this. I'm glad that people can really get along here :)

[-] roastpotatothief@lemmy.ml 1 points 3 years ago

That's the defence of the "slur filter" that everyone can agree on. It's harmless because it does almost nothing. It has no real benefit or cost.

As for the people who say it deters fascists: that claim just doesn't hold water.

this post was submitted on 26 Feb 2021
22 points (100.0% liked)