802
submitted 1 year ago* (last edited 1 year ago) by db0@lemmy.dbzer0.com to c/selfhosted@lemmy.world

I noticed a bit of panic around here lately and as I have had to continuously fight against pedos for the past year, I have developed tools to help me detect and prevent this content.

As luck would have it, we recently published one of our anti-CSAM checker tools as a Python library that anyone can use. So I thought I could use this to help lemmy admins feel a bit safer.

The tool can either go through all the images in your object storage and delete all CSAM, or it can run continuously and scan and delete all new images as well. The suggested approach is to run it once with --all, and then leave it running as a daemon.

A better option would be to retrieve the exact images uploaded via the lemmy/pict-rs API, but we're not quite there yet.

Let me know if you have any issues or suggested improvements.

EDIT: Just to clarify, you should run this on your desktop PC with a GPU, not on your lemmy server!

[-] veroxii@aussie.zone 76 points 1 year ago

This is extremely cool.

Because of the federated nature of Lemmy, many instances might be scanning the same images. I wonder if there might be some way to pool resources, so that if one instance has already scanned an image, a hash of it can be used to identify it and the whole AI model doesn't need to be rerun.

There's still the issue of how you trust the cache, but maybe there's some way for a trusted entity to maintain this list?
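The pooled-cache idea above can be sketched in a few lines. Everything here (the cache structure, the `scan_image` placeholder) is illustrative, not any real Lemmy or pict-rs API:

```python
import hashlib

# Hypothetical pooled cache mapping a content hash to a prior scan verdict,
# shared between participating instances (illustrative names only).
shared_cache = {}

def scan_image(data: bytes) -> str:
    # Placeholder for the expensive AI classification step.
    return "safe"

def check_image(data: bytes) -> str:
    """Return a verdict, reusing a pooled result when available."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in shared_cache:
        return shared_cache[digest]  # another instance already scanned it
    verdict = scan_image(data)       # fall back to running the full model
    shared_cache[digest] = verdict
    return verdict
```

Note that exact hashing only helps for byte-identical copies, which is the common case when instances federate the same upload; near-duplicates would need perceptual hashing instead.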

[-] irdc@derp.foo 21 points 1 year ago* (last edited 1 year ago)

How about a federated system for sharing “known safe” image attestations? That way, the trust list is something managed locally by each participating instance.

Edit: thinking about it some more, a federated image classification system would allow some instances to be more strict than others.

[-] gabe@literature.cafe 26 points 1 year ago

I think building some kind of system that allows smaller instances to rely on help from larger instances would be extremely awesome.

Like, Lemmy has the potential to lead the fediverse in safety tools if we put the work in.

[-] huginn@feddit.it 14 points 1 year ago

Consensus algorithms. But it means there will always be duplicate work.

No way around that, unfortunately.

[-] kbotc@lemmy.world 9 points 1 year ago* (last edited 1 year ago)

Why? Use something like Raft: elect a leader, have the leader coordinate the AI tool, then exchange results, with each node running its own subset of image hashes.

That does mean you need a trust system, though.
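The "each node running its own subset" part doesn't necessarily need full Raft: a deterministic assignment such as rendezvous hashing lets every instance independently compute who scans what. A minimal sketch, with hypothetical node names:

```python
import hashlib

def owner_node(image_hash: str, nodes: list[str]) -> str:
    """Rendezvous (highest-random-weight) hashing: every node computes the
    same assignment independently, so each image is scanned exactly once
    as long as all nodes agree on the membership list."""
    def weight(node: str) -> int:
        # Score each (node, image) pair; the highest score wins.
        return int.from_bytes(
            hashlib.sha256(f"{node}:{image_hash}".encode()).digest()[:8], "big"
        )
    return max(nodes, key=weight)
```

A leader election would then only be needed to agree on membership changes, not on every image.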

[-] irdc@derp.foo 9 points 1 year ago

As I'm saying, I don't think you need to: manually subscribing to each trusted instance via ActivityPub should suffice. The pass/fail determination can be done when querying for known images.

[-] neutron@thelemmy.club 13 points 1 year ago

I'd rather have a text-only instance with no media at all. Can this be done?

[-] Rentlar@lemmy.ca 17 points 1 year ago

Yes, it is definitely possible! Just run the server without pictrs installed. Note that it will still be possible to link to external images.

[-] Morgikan@lemm.ee 12 points 1 year ago

My understanding is that it's bad practice to host images on Lemmy instances anyway, as it contributes to storage bloat. Instead of coming up with a one-off script solution (albeit a good effort), wouldn't it make sense to offload the scanning to a third party like imgur or catbox, who would already be doing it, and just link images into Lemmy? If nothing else, wouldn't that limit the liability on instance admins?

[-] Starbuck@lemmy.world 11 points 1 year ago

TBH, I wouldn't be comfortable outsourcing the scanning like that if I were running an instance. It only takes a bit of resources to know that you have done your due diligence. Hopefully the runtime can be optimized to be faster.

[-] sunaurus@lemm.ee 45 points 1 year ago

As a test, I ran this on a very early backup of lemm.ee images, from when we had very little federation and very few uploads, and unfortunately it is finding a whole bunch of false positives. Just some examples it flagged as CSAM:

  • Calvin and Hobbes comic
  • The default Lemmy logo
  • Some random user's avatar, which is just a digital drawing of a person's face
  • A Pikachu image

Do you think the parameters of the script should be tuned? I'm happy to test it further on my backup, as I am reasonably certain that it doesn't contain any actual CSAM.

[-] db0@lemmy.dbzer0.com 51 points 1 year ago* (last edited 1 year ago)

This is normal. You should be worried if it wasn't catching any false positives, as that would mean a lot of false negatives were slipping through. I am planning to add args to make it more or less strict, but it will never be perfect. So long as it's not catching most images, and most of the false positives are porn or contain children, I consider it worthwhile.

I'll let you know when the functionality for the severity is updated.

[-] Decronym@lemmy.decronym.xyz 39 points 1 year ago* (last edited 1 year ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

  • CF: CloudFlare
  • CSAM: Child Sexual Abuse Material
  • DNS: Domain Name Service/System
  • HTTP: Hypertext Transfer Protocol (the Web)
  • nginx: popular HTTP server

4 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.

[Thread #88 for this sub, first seen 28th Aug 2023, 22:25] [FAQ] [Full list] [Contact] [Source code]

[-] CaptainBlagbird@lemmy.world 37 points 1 year ago

How do you even safely test scripts/tools like this 😵‍💫

[-] hackitfast@lemmy.world 27 points 1 year ago

I'd bet there's a CSAM test dataset of innocuous images that get picked up by such scripts. Not sure how the system works, but if it's hash-based then it would be pretty simple to add that to the script.

[-] cyborganism@lemmy.ca 28 points 1 year ago

I don't host a server myself, but can this tool identify the users who posted the images and create a report with their IP addresses?

This could help identify who spreads that content and it can be used to notify authorities. No?

[-] db0@lemmy.dbzer0.com 30 points 1 year ago

No, but it will record the object storage path. We then need a way to connect that path to the pict-rs image ID, and once we do that, connect the pict-rs image ID to the comment or post which uploaded it. I don't know how to do the last two steps, however, so hopefully someone else will step up for this.
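The two missing linking steps could look roughly like this. The table and column names below are invented stand-ins for the real pict-rs and lemmy schemas, purely to illustrate the joins someone would need to work out:

```python
import sqlite3

# Hypothetical, simplified schema standing in for the real pict-rs and
# lemmy databases; the actual table/column names will differ.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE pictrs_files (storage_path TEXT, alias TEXT);
CREATE TABLE posts (id INTEGER, url TEXT);
""")
db.execute("INSERT INTO pictrs_files VALUES ('00/ab/cafe.webp', 'deadbeef.webp')")
db.execute("INSERT INTO posts VALUES (42, 'https://example.com/pictrs/image/deadbeef.webp')")

def post_for_storage_path(path: str):
    """Step 1: object storage path -> pict-rs alias.
    Step 2: alias -> the post whose URL references it."""
    row = db.execute(
        "SELECT alias FROM pictrs_files WHERE storage_path = ?", (path,)
    ).fetchone()
    if row is None:
        return None
    post = db.execute(
        "SELECT id FROM posts WHERE url LIKE ?", (f"%/{row[0]}",)
    ).fetchone()
    return post[0] if post else None
```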

[-] mustardman@discuss.tchncs.de 24 points 1 year ago

Thank you for helping make the fediverse a better place.

[-] bdonvr@thelemmy.club 22 points 1 year ago

Worth noting you seem to be missing dependencies in requirements.txt, notably unidecode and strenum.

Also, this only works with GPU acceleration on NVIDIA (maybe; I messed around with trying to get it working with AMD ROCm instead of CUDA, but didn't get it running).

[-] db0@lemmy.dbzer0.com 10 points 1 year ago

Ah, thanks. I'll add them.

[-] A10@kerala.party 22 points 1 year ago

Don't have a GPU on my server. How is performance on the CPU?

[-] db0@lemmy.dbzer0.com 43 points 1 year ago

It will be atrocious. You can run it, but you'll likely be waiting for weeks if not months.

[-] Rescuer6394@feddit.nl 11 points 1 year ago

The model under the hood is CLIP Interrogator, and it looks like it's just the Torch model.

It will run on CPU, but we can do better: an ONNX version of the model would run a lot faster on CPU.

[-] db0@lemmy.dbzer0.com 11 points 1 year ago

Sure, or a .cpp port. But it will still be nowhere near as fast as a GPU. It might be sufficient for just checking new images, though.

[-] rcmaehl@lemmy.world 18 points 1 year ago* (last edited 1 year ago)

Hi db0, if I could make an additional suggestion.

Add detection of additional content appended or attached to media files. Pict-rs does not reprocess all media types on upload, and it's not hard to attach an entire .zip file or other media within an image (https://wiki.linuxquestions.org/wiki/Embed_a_zip_file_into_an_image).

[-] db0@lemmy.dbzer0.com 19 points 1 year ago

Currently I delete on PIL exceptions. I assume that if someone uploaded a .zip to your image storage, you'd want it deleted.

[-] Starbuck@lemmy.world 9 points 1 year ago

The fun part is that it’s still a valid JPEG file if you put more data in it. The file should be fully re-encoded to be sure.
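A cheap first check for smuggled data is to look for bytes after the JPEG end-of-image marker. This is only a heuristic (embedded EXIF thumbnails contain their own EOI markers, so it can over-report), which is why full re-encoding is the reliable fix:

```python
def jpeg_trailing_bytes(data: bytes) -> int:
    """Return the number of bytes appended after the first JPEG
    end-of-image marker (EOI, 0xFFD9); 0 means nothing follows the image.
    Heuristic only: embedded thumbnails also contain EOI markers."""
    eoi = data.find(b"\xff\xd9")
    if eoi == -1:
        raise ValueError("no EOI marker: not a complete JPEG")
    return len(data) - (eoi + 2)
```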

[-] FriendlyBeagleDog 18 points 1 year ago

Not well versed in the field, but I understand that large tech companies which host user-generated content match the hashes of uploaded content against a list of known-bad hashes as part of their strategy to detect and tackle such content.

Could it be possible to adopt a strategy like that as a first-pass to improve detection, and reduce the compute load associated with running every file through an AI model?

[-] dan@upvote.au 16 points 1 year ago* (last edited 1 year ago)

match the hashes

It's more than just basic hash matching, because it has to catch content even if it's been resized, cropped, reduced in quality (lower JPEG quality with more artifacts), had its colour balance changed, etc.

[-] crunchpaste@lemmy.dbzer0.com 13 points 1 year ago

Well, we have hashing algorithms that do exactly that, like phash for example.
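For illustration, phash's simpler cousin, the average hash, fits in a few lines of pure Python. This sketch assumes the image has already been downscaled to a small grayscale grid; real libraries like `imagehash` handle the resize, and pHash proper uses a DCT instead of a plain mean:

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Average hash over an already-downscaled grayscale grid: each bit
    records whether a pixel is brighter than the mean. Visually similar
    images yield hashes with a small Hamming distance."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")
```

Because the hash encodes coarse structure rather than exact bytes, small quality or colour changes usually leave the Hamming distance near zero.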

[-] Ozzy@lemmy.ml 16 points 1 year ago

based db0 releasing great tools and maintaining a great community

[-] Rentlar@lemmy.ca 14 points 1 year ago

Hey db0 thanks for putting in extra effort to help the community (as you have multiple times) when big issues like this crop up on Lemmy.

Despite being a pressing issue, this is one that people are also a little reluctant to help solve, for fear of getting in trouble themselves. (How can a server admin develop a method to detect and remove/prevent CSAM distribution without accessing known examples, which is extremely illegal?)

Another example was the bot-spam wave, where you developed Overseer in response very quickly. I'm hoping here too that devs will join you to work out how best to implement the changes into Lemmy to combat this problem.

[-] chrisbit@leminal.space 10 points 1 year ago* (last edited 1 year ago)

Thanks for releasing this. After doing a --dry_run, can the flagged files then be removed without re-analysing all images?

[-] db0@lemmy.dbzer0.com 14 points 1 year ago

Not currently supported. It's on my to-do list.

[-] donut4ever@lemm.ee 9 points 1 year ago

This is awesome. Thank you for making it.

[-] sunaurus@lemm.ee 9 points 1 year ago* (last edited 1 year ago)

Any thoughts about using this as a middleware between nginx and Lemmy for all image uploads?

Edit: I guess that wouldn't work for external images, unless it also ran for all outgoing requests from pict-rs... I think the easiest way to integrate this with pict-rs would be through some upstream changes that would allow pict-rs itself to call this code on every image.

[-] db0@lemmy.dbzer0.com 10 points 1 year ago

Exactly. If the pict-rs dev allowed us to run an executable on each image before accepting it, it would make things much easier.
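Such a hook could be as simple as pict-rs shelling out to an admin-configured scanner and accepting the upload only on a zero exit code. This is a hypothetical sketch of that contract, not anything pict-rs actually supports:

```python
import subprocess
import sys

def accept_image(scanner_cmd: list[str], image_path: str) -> bool:
    """Run a configured scanner executable on an uploaded image and
    accept it only if the scanner exits with status 0. Both the command
    and the exit-code convention are assumptions for illustration."""
    result = subprocess.run(
        scanner_cmd + [image_path],
        timeout=30,  # don't let a hung scanner block uploads forever
    )
    return result.returncode == 0
```

The exit-code convention keeps pict-rs decoupled from any particular scanner: admins could plug in this tool, PhotoDNA-style hash checkers, or anything else.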

this post was submitted on 28 Aug 2023
802 points (100.0% liked)
