submitted 2 days ago* (last edited 2 days ago) by rimu@piefed.social to c/piefed_meta@piefed.social

A way for people to screen out AI-generated content, similar to how people can already screen out NSFW content.

It works the same as the NSFW filter, except underneath it's an integer instead of a Boolean.

Content authors will flag their content as AI-generated on a sliding scale from 0 (no AI) to 100 (completely AI-generated). Intermediate values could be used too, although this amount of nuance may be confusing for some.

Users will be able to set an "AI-generated threshold" which filters out content above that threshold. At the UI level this could be presented as a checkbox, but maybe a slider would be good.

Mods need to be able to set the AI level on content, as they do now with NSFW content.

Communities will have an AI-gen value too, which is automatically applied to all content within. Instance admins can override this value for local or remote communities.
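A minimal sketch of the filtering logic described above (all class and function names are hypothetical; the 0-100 integer, the community-wide default, and the per-user threshold are from the proposal — treating the community value as a floor is my assumption about how "automatically applied" would work):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Post:
    title: str
    ai_level: Optional[int] = None  # 0-100; None = author flagged nothing (treated as 0)

@dataclass
class Community:
    ai_level: int = 0  # community-wide value, automatically applied to all content within

def effective_ai_level(post: Post, community: Community) -> int:
    # The community value acts as a floor: a post in an AI-heavy
    # community can't be "less AI" than the community itself.
    author_level = post.ai_level if post.ai_level is not None else 0
    return max(author_level, community.ai_level)

def visible_posts(posts: List[Post], community: Community, threshold: int) -> List[Post]:
    # Hide anything whose effective level exceeds the user's threshold.
    return [p for p in posts if effective_ai_level(p, community) <= threshold]

posts = [
    Post("hand-drawn art"),          # no flag -> defaults to 0
    Post("AI background edit", 50),
    Post("prompt-only image", 100),
]
community = Community(ai_level=0)
print([p.title for p in visible_posts(posts, community, 40)])  # ['hand-drawn art']
```

A mod or admin override would simply write a new `ai_level` onto the post or community before this filter runs.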

Thoughts?

[-] nokturne213@sopuli.xyz 23 points 2 days ago

I love this idea, but I see it being ignored, much the way language is ignored on lemmy.

[-] hono4kami@piefed.social 1 points 2 days ago

Would you mind explaining what you meant by "much the way language is ignored on lemmy"?

[-] rimu@piefed.social 5 points 2 days ago

Sometimes Lemmy users do not assign a language to their post or they assign the wrong language.

[-] FundMECFSResearch 3 points 2 days ago

And some UIs (like voyager) completely remove the language options.

[-] karasu_sue@pf.korako.me 11 points 2 days ago

I use AI for all of my English posts, because I can't write in English myself.
I write in Japanese and have AI translate it into English before posting—this post included.

In cases like this, would I be expected to mark my post as AI-generated?
Also, is using translation to post generally frowned upon?

Maybe I'm just overthinking it though.

[-] rimu@piefed.social 2 points 2 days ago

This would primarily be used for images.

IMO translation with AI is not worth putting a special filter on.

Also if someone with dyslexia or some other neuro-spicy condition needs a little help with grammar, that's fine.

There is a spectrum of AI use, which is why I proposed using a number for the value rather than a binary "is AI" / "is not AI".

At one end of the spectrum could be an image made from a text prompt which would be considered 100% AI.

A photograph edited in Photoshop with the AI tool that replaces the background with a sunset could be flagged as 50% AI.

Maybe something I write and then ask ChatGPT to fix up the grammar and weird personal idiosyncrasies could be 20% AI.

Something like that.
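The spectrum sketched above could be written down as rough guideline values (the numbers are the illustrative ones from this comment, not a fixed scheme):

```python
# Illustrative flag values for the use cases described above.
AI_LEVEL_GUIDELINES = {
    "image generated from a text prompt": 100,
    "photo with AI-replaced background": 50,
    "own writing, grammar fixed by ChatGPT": 20,
    "no AI involvement": 0,
}

print(sorted(AI_LEVEL_GUIDELINES.values()))  # [0, 20, 50, 100]
```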

[-] karasu_sue@pf.korako.me 2 points 2 days ago

I see—personally, I think it's a good idea when it comes to images.
While it's true that not all users will flag their content properly, moderators can still add the flag afterward.

It's one of those things where it's better to have it than not.

[-] GenosseFlosse@feddit.org 1 points 2 days ago

I would stay away from ChatGPT for translations, at least for English to German. In the latest iOS demo, the live translation rendered "I need your help organizing my wedding" into German as "I need you for my wedding". I have seen other examples with Chinese scam shops whose auto-translated text always sticks out. Same with some games where the texts sometimes don't make sense.

[-] astro_ray@piefed.social 8 points 2 days ago

Honestly, it seems like too much of a hassle compared to the benefit. I don't want to flag my content as "not AI" every time I post something. And I don't see enough AI content to justify the effort. I am probably too lazy for this feature.

[-] rimu@piefed.social 9 points 2 days ago

"Not ai" would be the default, you wouldn't need to do anything. Just like "not NSFW" is the default.

[-] borisentiu@piefed.social 3 points 2 days ago

'No AI' could be the default.

[-] Deceptichum@quokk.au 6 points 2 days ago

As someone who posts AI-made stuff often, I would not use it on principle. After experiencing dbzer0's recent rule requiring AI posts to be flagged, I found it led to a dramatic increase in negative and hateful comments.

[-] rimu@piefed.social 3 points 2 days ago* (last edited 2 days ago)

Theoretically, some PieFed-based communities would eventually penalize you for not using it, just as we penalize those who post NSFW content without flagging it, depending on the rules of the community/instance.

This wouldn't happen overnight. At first there will be so many people still using Lemmy etc., which does not have this feature, that 99% of posts won't have any gen-AI flag, and enforcing such a rule would instantly tank any community.

But even so it will be yet another reason to switch to PieFed, which I am all in favor of.

[-] Deceptichum@quokk.au 2 points 2 days ago

It’s more the users than the instances I’m worried about. Any sort of identifier would be used to harass anyone posting it.

[-] rimu@piefed.social 1 points 2 days ago* (last edited 2 days ago)

In the end I would hope that AI content creators benefit too, by having an audience that is seeking that kind of content or is at least ambivalent about the medium of the message. There would also be fewer negative reactions, because people who really want to avoid it would not see it anyway.

[-] Ek-Hou-Van-Braai@piefed.social 3 points 2 days ago

I like the idea, it would also be great to add a "Politics" filter, like we have for NSFW content, so that users can filter that out.

[-] danzabia@infosec.pub 3 points 2 days ago

I would guess None, Some, All would be sufficient since people's notion of where on the slider they should land will be highly variable anyway. Then users can filter on None, Some, All.
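Collapsing the proposed 0-100 value into those three buckets could look like this (a sketch; treating only the exact endpoints 0 and 100 as None/All is an assumption, since people's slider placements will vary):

```python
def ai_bucket(level: int) -> str:
    # Collapse the raw 0-100 flag into three coarse user-facing buckets.
    if level == 0:
        return "None"
    if level == 100:
        return "All"
    return "Some"

print([ai_bucket(v) for v in (0, 20, 50, 100)])  # ['None', 'Some', 'Some', 'All']
```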

I guess I haven't noticed much AI content, but maybe it's just my subs -- is it mainly art-related? Or memes or something?

[-] rimu@piefed.social 3 points 2 days ago

Other social networks are far more overrun with AI.

Facebook recently introduced a requirement to flag all AI content. https://about.fb.com/news/2024/04/metas-approach-to-labeling-ai-generated-content-and-manipulated-media/

[-] danzabia@infosec.pub 1 points 2 days ago

I see. I don't really use other social media so I'm out of the loop. What do you think the long run (5-10 years) looks like, as far as social network administration and moderation?

[-] rimu@piefed.social 1 points 2 days ago* (last edited 2 days ago)

Hard to say, it's not really my main area of expertise. I just code stuff.

Seems like as things scale, the problems of moderation get exponentially harder. If the fediverse stays the same size or shrinks, we'll be fine with the tools we have now.

It would be nice if we could build the beginnings of stronger capabilities now, though. Then if an explosion of growth happens we'll cope a little better. FediThreat and FIRES are interesting.

[-] hitagi@ani.social 2 points 2 days ago

I'd prefer to just leave this as disclaimers in the flair, title, or body. A percentage is awkward to use. I can imagine users setting it to 1% just to get around people's content filters. I just don't think people will be honest with a tool like this.

I assume this is for AI art communities, in which case I think anti-AI users would already block. I'd prefer self-assignable community tags. An AI community could tag itself with #ai and users with a global blocklist could just block #ai. Somebody here mentioned wanting a "politics filter" and you could apply that here too. Communities can assign themselves with #politics.

On top of that it can be like how Lemmy communities already post to Mastodon or Misskey using hashtags of their names (e.g. !gundam@ani.social posts show up in #gundam).

Anyway, I think most AI stuff stays within its own communities. I don't often see both AI and non-AI stuff within the same community.
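The self-assignable community tag idea above amounts to a simple set intersection against the user's global blocklist (tag names and the function name are hypothetical):

```python
def is_hidden(community_tags: set, user_blocklist: set) -> bool:
    # Hide a community if it carries any tag the user has blocked.
    return bool(community_tags & user_blocklist)

blocklist = {"ai", "politics"}
print(is_hidden({"ai", "art"}, blocklist))        # True
print(is_hidden({"gundam", "mecha"}, blocklist))  # False
```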

[-] rimu@piefed.social 2 points 2 days ago

Yeah just doing it at the community level might be simplest.

[-] Snoopy@piefed.social 2 points 2 days ago

Some websites use AI-generated content; this way we could filter them out.

That's a great idea. I like the scale :)

On accessibility & AI:
Some users share a podcast/video without any transcription or subtitles. We could use Whisper to generate a text file and put it into a collaborative editing mode so others can improve the text (similar to a wiki).

Maybe the ability to tag it:

  • AI translation
  • AI translation & re-read by a human
[-] Solano@piefed.social 2 points 2 days ago

AI definitely needs a method to identify it, to distinguish it from real work, and possibly to be used as a filter for those who don't want to see it, or for those who prefer to see it. I think a simple checkbox would be sufficient, because we don't need the degree of AI, only that AI is being used. I like that every post requires a language selection, where there is a default and you can choose to change it. I'm guessing posters and commenters who select the wrong language will be moderated; the same should apply to AI posts and comments, at the discretion of the moderators of their respective communities. Having the functionality is huge, even if some communities do not want to use it.

[-] borisentiu@piefed.social 2 points 2 days ago

I like the idea!

this post was submitted on 27 Jun 2025

PieFed Meta


Discuss PieFed project direction, provide feedback, ask questions, suggest improvements, and engage in conversations related to the platform organization, policies, features, and community dynamics.
