
As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.

What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?

poVoq@slrpnk.net 8 points 2 months ago

Even more problematic are entire communities made up of astroturfing bots. This kind of thing is increasingly easy and cheap to set up and will fool most people looking for advice online.

Danterious@lemmy.dbzer0.com 2 points 2 months ago* (last edited 2 months ago)

Maybe we should look for ways of tracking coordinated behaviour. A definition I've heard for social media propaganda is "coordinated inauthentic behaviour", and while I don't think it's possible to determine whether a user is being authentic, it should be possible to see whether there is consistent behaviour across different kinds of users and what they are coordinating on (see the sketch below).

Edit: Because all bots do have a purpose eventually, and that should be visible.

Edit2: Eww, I just realized the term came from Meta. If someone has a better term, I will use that instead.

Anti Commercial-AI license (CC BY-NC-SA 4.0)
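
A minimal sketch of what such coordination tracking could look like, assuming (hypothetically) that we can export, per account, the set of threads it has posted in: compare accounts pairwise by activity overlap and flag pairs whose overlap is unusually high. The account names, thread IDs, and threshold below are invented for illustration, not taken from any real instance or API.

```python
# Hypothetical sketch: flag account pairs with unusually similar activity.
# Assumes we already have, per account, the set of thread IDs it has
# commented in; the data and the threshold are made up for illustration.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap between two activity sets (0 = disjoint, 1 = identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# account -> set of thread IDs the account has commented in (toy data)
activity = {
    "alice@example.social": {"t1", "t2", "t3", "t9"},
    "bob@example.social":   {"t1", "t2", "t3", "t8"},
    "carol@example.social": {"t4", "t5"},
}

SUSPICION_THRESHOLD = 0.5  # arbitrary; a real system would calibrate this

for (user_a, set_a), (user_b, set_b) in combinations(activity.items(), 2):
    score = jaccard(set_a, set_b)
    if score >= SUSPICION_THRESHOLD:
        print(f"possible coordination: {user_a} <-> {user_b} (overlap {score:.2f})")
```

A single overlap number is of course crude; a real detector would also look at posting times, vote patterns, and content similarity, but the same pairwise-comparison idea applies.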
