
As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.

What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?

top 29 comments
[-] disguised_doge@kbin.earth 1 points 1 hour ago

There was already a wave of bots identified, iirc. They were identified only because:

1. the bots had random letters for usernames

2. the bots did nothing but downvote, instantly downvoting every post by specific people who held specific opinions

It turned into a flamewar; by the time I learned about it, I think the mods had deleted a lot of the discussion. But, like the big tech platforms, the plan for bots is likely going to be "oh crap, we have no idea how to solve this issue." I don't intend to diss the admins; bots are just a pain in the ass to stop.
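For what it's worth, a wave that obvious is exactly what simple heuristics can catch. A toy sketch in Python of the two tells described above (the event fields are hypothetical; nothing like this ships with Lemmy):

```python
import math
from collections import Counter

def username_entropy(name: str) -> float:
    """Shannon entropy of the character distribution; random-letter names score high."""
    counts = Counter(name)
    total = len(name)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_vote_bot(username: str, vote_events: list[dict]) -> bool:
    """vote_events: [{"target_author": str, "delay_s": float, "is_down": bool}, ...]"""
    if not vote_events:
        return False
    downs = [e for e in vote_events if e["is_down"]]
    if len(downs) != len(vote_events):  # the wave did nothing but downvote
        return False
    mostly_instant = sum(e["delay_s"] < 30 for e in downs) / len(downs) > 0.9
    few_targets = len({e["target_author"] for e in downs}) <= 5  # fixated on specific people
    return username_entropy(username) > 3.3 and mostly_instant and few_targets
```

The thresholds are arbitrary; the point is that this wave only got caught because it tripped such crude tells, and the next one won't.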

[-] AceFuzzLord@lemm.ee 1 points 4 hours ago* (last edited 4 hours ago)

As far as I'm aware, nothing is implemented yet. I've got no real ideas because I'm not smart enough for this type of thing. The only solution I can think of is a paywall (I know, disgusting) to raise the barrier to entry and keep bots out. That, and I don't know if it's currently possible, but making it so only people on your instance can comment, vote, and report posts on that instance.

I personally feel that, depending on the price of joining, that could slightly lessen the bot problem for that specific instance, since getting banned means you wasted money instead of just time. Though it might also keep the instance from growing.

[-] horse_battery_staple@lemmy.world 41 points 17 hours ago

We're not handling the LLM generative bullshit bots anywhere right now. There's a thing called the dead Internet theory: essentially, most of the traffic on the Internet is now bots.

https://en.m.wikipedia.org/wiki/Dead_Internet_theory

[-] Docus@lemmy.world 19 points 16 hours ago

It’s not just the internet. For example, students are handing in essays straight from ChatGPT. Uni scanners flag them and the students may fail. But there is no good evidence on either side: the uni's detection is unreliable (and unlikely to improve on false positives, or negatives for that matter), and it’s hard for a student to prove they did not use an LLM. Job seekers send in LLM-generated letters. Consultants probably give LLM-based reports to clients. We’re doomed.

[-] JubilantJaguar@lemmy.world 11 points 16 hours ago

Hardly. Just do away with coursework and stick to in-person exams and orals.

[-] wholookshere 8 points 14 hours ago* (last edited 14 hours ago)

Spoken by someone who has never dealt with a learning disability.

[-] Docus@lemmy.world 3 points 12 hours ago

I don’t disagree, but it’s probably not that easy. Universities in my country don’t have the resources anymore to do many orals, and depending on the subject exams don’t test the same skills as coursework.

[-] Blaze@feddit.org 13 points 17 hours ago

I saw a comment the other day saying that "the line between the most advanced bot and the least talkative human is getting thinner and thinner."

Which made me think: what if bots are set up to pretend to be actual users? With a fake life they can talk about, fake anecdotes, fake hobbies, fake jokes, where everything seems legit and consistent. That would be pretty weird, and probably impossible to detect.

And then, when that roleplaying bot once in a while recommends a product, you would probably trust it; after all, it gave you advice for your cat last week.

Not sure what to do in that scenario, really

[-] spankmonkey@lemmy.world 17 points 17 hours ago

I've just accepted that if a bot interaction has the same impact on me as someone who is making up a fictional backstory, I'm not really worried about whether it is a bot or not. A bot shilling for Musk or a person shilling for Musk because they bought the hype are basically the same thing.

In my opinion, the main problem with bots is not individual accounts pretending to be people, but the damage they can do en masse through a firehose of spam posts, comments, and manipulated engagement mechanics like up/down votes. At that point there is no need for an individual account to be convincing, because it is lost in the sea of trash.

[-] poVoq@slrpnk.net 8 points 16 hours ago

Even more problematic are entire communities made of astroturfing bots. This kind of thing is increasingly easy and cheap to set up and will fool most people looking for advice online.

[-] Danterious@lemmy.dbzer0.com 2 points 13 hours ago* (last edited 12 hours ago)

Maybe we should look for ways of tracking coordinated behaviour. A definition I've heard for social media propaganda is "coordinated inauthentic behaviour", and while I don't think it's possible to determine whether a single user is being authentic or not, it should be possible to see whether there is consistent behaviour across different kinds of users and what they are coordinating on.

Edit: Because all bots do have a purpose eventually, and that should be visible.

Edit 2: Eww, realized the term came from Meta. If someone has a better term, I will use that instead.
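As a rough illustration of what "visible coordination" could mean: accounts whose vote histories overlap almost completely are worth a look, whoever is behind them. A toy sketch (the data shapes are hypothetical, not a Lemmy API):

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets, from 0.0 (disjoint) to 1.0 (identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def coordinated_pairs(votes_by_user: dict[str, set[str]],
                      threshold: float = 0.8) -> list[tuple[str, str]]:
    """votes_by_user maps a username to the set of post IDs it voted on."""
    return [
        (u1, u2)
        for (u1, s1), (u2, s2) in combinations(votes_by_user.items(), 2)
        if jaccard(s1, s2) >= threshold
    ]
```

Pairwise comparison is O(n²), so a real implementation would need something smarter, but the idea is the same: judge coordination, not authenticity.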

Anti Commercial-AI license (CC BY-NC-SA 4.0)

[-] drkt@lemmy.dbzer0.com 2 points 14 hours ago

I am convinced that the bidet shills on reddit are bots. There's just no way that hundreds of thousands of people are suddenly interested in shitting appliances.

There's an easy way to tell.

If they're talking about a bidet without a heater, they're a bot because no human on earth wants an ass spraying of cold water.

[-] Blaze@feddit.org 2 points 14 hours ago
[-] drkt@lemmy.dbzer0.com 2 points 14 hours ago

You might consider me an independent thinker (I shit in the woods)

[-] ericjmorey@discuss.online 2 points 15 hours ago

A bot shilling for Musk or a person shilling for Musk because they bought the hype are basically the same thing.

It's the scale that changes. One bot can be replicated much more easily than a human shill.

[-] spankmonkey@lemmy.world 1 points 12 hours ago

So my second paragraph...

[-] jordanlund@lemmy.world 9 points 17 hours ago

I think smarter people than me will have to figure it out and even then it's going to be a war of escalation. Ban the bots, build better bots, back and forth back and forth.

Some news sites had an interesting take on comments sections. Before you could comment on an article, you had to correctly answer a 5 question quiz proving you actually read it.

But AI can do that now too.
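The gate itself is trivial to build; the hard part is writing questions an LLM can't answer faster than a human. A minimal sketch of the quiz check (all names hypothetical):

```python
def can_comment(quiz: list[dict], answers: dict[str, str], required: int = 5) -> bool:
    """quiz: [{"id": "q1", "question": "...", "answer": "..."}, ...]
    answers: the reader's submissions, keyed by question id."""
    correct = sum(
        answers.get(q["id"], "").strip().lower() == q["answer"].strip().lower()
        for q in quiz
    )
    return correct >= required
```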

[-] Blaze@feddit.org 11 points 17 hours ago

Some news sites had an interesting take on comments sections. Before you could comment on an article, you had to correctly answer a 5 question quiz proving you actually read it.

It would be interesting to try that on Lemmy for a day. People would probably not be happy.

[-] subignition@piefed.social 3 points 14 hours ago

As divisive as it would be, I think that would be a good thing overall...

It reminds me of the literacy test to use Kingdom of Loathing's chat features.

[-] FaceDeer@fedia.io 4 points 14 hours ago

Not only can AI do that, it probably does it far better than a human would.

I like XKCD's solution. Aside from the fact that it would heavily reinforce whatever bubble each community lived in, of course.

[-] AmidFuror@fedia.io 6 points 15 hours ago

To manage advanced bots, platforms like Lemmy should:

  • Verification: Implement robust account verification and clearly label bot accounts.
  • Behavioral Analysis: Use algorithms to identify bot-like behavior.
  • User Reporting: Enable easy reporting of suspected bots by users.
  • Rate Limiting: Limit posting frequency to reduce spam.
  • Content Moderation: Enhance tools to detect and manage bot-generated content.
  • User Education: Provide resources to help users recognize bots.
  • Adaptive Policies: Regularly update policies to counter evolving bot tactics.

These strategies can help maintain a healthier online community.

[-] kbal@fedia.io 5 points 14 hours ago

Did an AI write that, or are you a human with an uncanny ability to imitate their style?

[-] AmidFuror@fedia.io 4 points 13 hours ago

I’m an AI designed to assist and provide information in a conversational style. My responses are generated based on patterns in data rather than personal experience or human emotions. If you have more questions or need clarification on any topic, feel free to ask!

[-] ademir@lemmy.eco.br 3 points 15 hours ago

Verification: Implement robust account verification and clearly label bot accounts.

☑ Clear label for bot accounts
☑ 3 different levels of captcha verification (I use the intermediate level on my instance and rarely deal with any bots)

Behavioral Analysis: Use algorithms to identify bot-like behavior.

Profiling algorithms seem like something people are running away from when they choose fediverse platforms; this kind of solution has to be very well thought out and communicated.

User Reporting: Enable easy reporting of suspected bots by users.

☑ Reporting in Lemmy is just as easy as anywhere else.

Rate Limiting: Limit posting frequency to reduce spam.

☑ Like this? (a generic sketch of the idea follows at the end of this comment)

[image]

Content Moderation: Enhance tools to detect and manage bot-generated content.

What do you suggest other than profiling accounts?

User Education: Provide resources to help users recognize bots.

This is not up to the Lemmy development team.

Adaptive Policies: Regularly update policies to counter evolving bot tactics.

Idem.
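On the rate-limiting point above: the limits shown there are instance configuration, but the usual mechanism behind this kind of throttling is a token bucket. A generic sketch, not Lemmy's actual implementation:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` actions, refilling at `rate` tokens per second."""
    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.tokens = capacity
        self.rate = rate
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1  # spend one token for this post/comment/vote
            return True
        return False  # rate-limited: tell the client to slow down
```

For example, `TokenBucket(capacity=6, rate=0.1)` allows a burst of 6 posts, then roughly one every 10 seconds.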

[-] Crumbgrabber@lemm.ee 3 points 14 hours ago

"We should join them. It would be wise, Gandalf. There is hope that way."

[-] TimLovesTech@badatbeing.social 2 points 12 hours ago

For commercial services like Twitter or Reddit, the bots make sense because they let the platforms show inflated "user" numbers while also generating more random nonsense to sell ads against.

But for the fediverse, what would the goal be: post random stuff into the void and profit?? I guess you could long-game some users into a product they only research on the fediverse, but it seems more cost-effective for the botnets to attack the commercial networks first.

[-] distantsounds@lemmy.world 5 points 11 hours ago

There is a lot to be gained by political astroturfing, and that is already widespread in the fediverse.

[-] Magister@lemmy.world 1 points 12 hours ago

We are already invaded by bots, look at this https://beehaw.org/c/technology@lemmy.ml
