submitted 2 weeks ago by Slyke@lemmy.ml to c/lemmy_support@lemmy.ml

How can one configure their Lemmy instance to reject illegal content? And I mean the bad stuff, not just the NSFW stuff. There are some online services that will check images for you, but I'm unsure how they can integrate into Lemmy.

As Lemmy gets more popular, I'm worried nefarious users will post illegal content that I am liable for.

top 5 comments
[-] vk6flab@lemmy.radio 6 points 2 weeks ago

I am not a lawyer and I don't play one on the internet.

To my understanding, such content can only be prevented by controlling who can have an account on your instance.

That said, it's not clear to me how federated content is legally considered.

The only thing I can think of is running a bot on your instance that uses the API of a service like the ones you mention to deal with such images.
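A minimal sketch of what such a bot's core logic might look like. Everything here is hypothetical: `flag_posts` and `scan_image` are placeholder names, not part of the Lemmy API or any real scanning service. The actual network calls (fetching new posts, submitting reports) are left out so the decision logic is easy to test on its own.

```python
# Hypothetical moderation-bot core: given a batch of posts and an image
# scanner, return the IDs of posts whose images the scanner rejects.

def flag_posts(posts, scan_image):
    """Return IDs of posts whose image fails the safety scan.

    `posts` is a list of dicts with an "id" and an optional "image_url".
    `scan_image` is any callable taking a URL and returning True if the
    image is considered safe (e.g. a wrapper around a third-party API).
    """
    flagged = []
    for post in posts:
        url = post.get("image_url")
        if url is not None and not scan_image(url):
            flagged.append(post["id"])
    return flagged

# Stubbed demonstration: a fake scanner that rejects one known-bad URL.
fake_scanner = lambda url: url != "https://example.com/bad.png"
posts = [
    {"id": 1, "image_url": "https://example.com/ok.png"},
    {"id": 2, "image_url": "https://example.com/bad.png"},
    {"id": 3},  # text-only post, nothing to scan
]
print(flag_posts(posts, fake_scanner))  # → [2]
```

In a real bot, `scan_image` would call the scanning service over HTTP and the flagged IDs would be fed back to the instance as reports or removals via the Lemmy API.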

Your post is the first I've seen recently that even raises the issue of liability. In my opinion it's the single biggest concern in the fediverse, and it's why I've never hosted my own instance.

[-] db0@lemmy.dbzer0.com 5 points 2 weeks ago* (last edited 2 weeks ago)

https://github.com/db0/fedi-safety can scan images for CSAM both pre- and post-upload, including novel AI-generated images. If you need pre-upload scanning, you will also need to run https://github.com/db0/pictrs-safety alongside your instance. Both need a budget GPU to do the scans, but you can use your home PC.

[-] davel@lemmy.ml 4 points 2 weeks ago

There’s no such integration that I’m aware of. We rely on users reporting CSAM and such.

this post was submitted on 12 Jul 2025
15 points (100.0% liked)

Lemmy Support

4940 readers

Support / questions about Lemmy.

Matrix Space: #lemmy-space

founded 6 years ago