
As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.

What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?

[-] Blaze@feddit.org 13 points 2 months ago

I saw a comment the other day saying that "the line between the most advanced bot and the least talkative human is getting more and more thinner"

Which made me think: what if bots are set up to pretend to be actual users? With a fake life that they could talk about, fake anecdotes, fake hobbies, fake jokes, but everything would seem legit and consistent. That would be pretty weird, but probably impossible to detect.

And then when that roleplaying bot recommends a product once in a while, you would probably trust it; after all, it gave you advice for your cat last week.

Not sure what to do in that scenario, really

[-] spankmonkey@lemmy.world 17 points 2 months ago

I've just accepted that if a bot interaction has the same impact on me as someone who is making up a fictional backstory, I'm not really worried about whether it is a bot or not. A bot shilling for Musk or a person shilling for Musk because they bought the hype are basically the same thing.

In my opinion, the main problem with bots is not individual accounts pretending to be people, but the damage they can do en masse through a firehose of spam posts, comments, and manipulation of engagement mechanics like up/down votes. At that point there is no need for an individual account to be convincing, because it is lost in the sea of trash.

[-] poVoq@slrpnk.net 8 points 2 months ago

Even more problematic are entire communities made up of astroturfing bots. This kind of thing is increasingly easy and cheap to set up and will fool most people looking for advice online.

[-] Danterious@lemmy.dbzer0.com 2 points 2 months ago* (last edited 2 months ago)

Maybe we should look for ways of tracking coordinated behaviour. A definition I've heard for social media propaganda is "coordinated inauthentic behaviour", and while I don't think it's possible to determine whether a user is being authentic or not, it should be possible to see whether there is consistent behaviour between different kinds of users and what they are coordinating on.

Edit: Because all bots do have a purpose eventually, and that should be visible.

Edit2: Eww realized the term came from Meta. If someone has a better term I will use that instead.

~Anti~ ~Commercial-AI~ ~license~ ~(CC~ ~BY-NC-SA~ ~4.0)~
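The "consistent behaviour between accounts" idea above can be prototyped fairly simply. A minimal sketch (all data and function names here are hypothetical, not any real Lemmy API): flag pairs of accounts whose sets of threads interacted with overlap suspiciously much, using Jaccard similarity.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of thread IDs (0.0 = disjoint, 1.0 = identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def flag_coordinated(activity: dict, threshold: float = 0.8) -> list:
    """Return account pairs whose interaction sets overlap above `threshold`.

    activity maps account name -> set of thread IDs the account posted/voted in.
    """
    return [
        (u, v)
        for (u, au), (v, av) in combinations(activity.items(), 2)
        if jaccard(au, av) >= threshold
    ]

# Hypothetical example: two accounts hitting exactly the same threads.
activity = {
    "acct_a": {"t1", "t2", "t3", "t4"},
    "acct_b": {"t1", "t2", "t3", "t4"},
    "acct_c": {"t2", "t9"},
}
print(flag_coordinated(activity))  # [('acct_a', 'acct_b')]
```

A real detector would also need to look at timing, vote direction, and content similarity; raw overlap alone flags ordinary users who follow the same few communities.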

[-] ericjmorey@discuss.online 2 points 2 months ago

A bot shilling for Musk or a person shilling for Musk because they bought the hype are basically the same thing.

It's the scale that changes. One bot can be replicated much more easily than a human shill.

[-] spankmonkey@lemmy.world 2 points 2 months ago

So my second paragraph...

this post was submitted on 16 Oct 2024
63 points (100.0% liked)

Fediverse


A community to talk about the Fediverse and all its related services using ActivityPub (Mastodon, Lemmy, KBin, etc.).
