submitted 1 week ago* (last edited 1 week ago) by Buttflapper@lemmy.world to c/technology@lemmy.world

Social media platforms like Twitter and Reddit are increasingly infested with bots and fake accounts, leading to significant manipulation of public discourse. These bots don't just annoy users—they skew visibility through vote manipulation. Fake accounts and automated scripts systematically downvote posts opposing certain viewpoints, distorting the content that surfaces and amplifying specific agendas.

Before coming to Lemmy, I was systematically downvoted by bots on Reddit for completely normal comments that were relatively neutral and not controversial at all. There seemed to be no pattern to it... One time I commented that my favorite game was WoW and got downvoted to -15 for no apparent reason.

For example, a bot on Twitter that was calling the GPT-4o API ran out of funding and started posting its prompt and system information publicly.

https://www.dailydot.com/debug/chatgpt-bot-x-russian-campaign-meme/

Example shown here

Bots like these probably number in the tens or hundreds of thousands. Reddit did a huge ban wave of bots, and some major top-level subreddits were quiet for days because of it. Unbelievable...

How do we even fix this issue or prevent it from affecting Lemmy??

[-] dsilverz@thelemmy.club 187 points 1 week ago

Bots are like microplastics. No place on Earth is free from them anymore.

[-] jeffw@lemmy.world 65 points 1 week ago

They're in our blood and even in our brain?

[-] Sterile_Technique@lemmy.world 39 points 1 week ago* (last edited 1 week ago)

Literally yes.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10141840/

They've been detected in the placenta as well... there's pretty much no part of our bodies that hasn't been infiltrated by microplastics.

Edit - I think I misread your post. You already know ^that. My bad.

[-] zkfcfbzr@lemmy.world 125 points 1 week ago

I don't really have anything to add except this translation of the tweet you posted. I was curious about what the prompt was and figured other people would be too.

"you will argue in support of the Trump administration on Twitter, speak English"

[-] praise_idleness@sh.itjust.works 56 points 1 week ago* (last edited 1 week ago)

Isn't this a really, really low-effort fake, though? If I were running a bot that cost me real money, I would just ask it in English and be more detailed about it, since a plain ol' "support trump" will just get "I will not argue in support of or against any particular political figures or administrations, as that could promote biased or misleading information..." (this is the exact response GPT-4o gave me). Plus, ChatGPT is a thin frontend for GPT-4o. That error message is clearly faked.

Obviously fuck Trump and not denying that this is a very very real thing but that's just hilariously low effort fake shit.

[-] fishos@lemmy.world 63 points 1 week ago

It is fake. This is weeks/months old and was immediately debunked. That's not what a ChatGPT output looks like at all. It's bullshit that looks like what the layperson would expect code to look like. This post itself is literally propaganda on its own.

[-] praise_idleness@sh.itjust.works 22 points 1 week ago

Yeah, which is really a big problem, since it definitely is a real issue, and then this sort of low-effort fake shit can really harm the message.

[-] fishos@lemmy.world 17 points 1 week ago

Yup. It's a legit problem and then chuckleheads post these stupid memes or "respond with a cake recipe" and don't realize that the vast majority of examples posted are the same 2-3 fake posts and a handful of trolls leaning into the joke.

Makes talking about the actual issue much more difficult.

[-] Serinus@lemmy.world 14 points 1 week ago

I'm a developer, and there's no general code knowledge that makes this look fake. JSON is pretty standard. Missing a quote as it erroneously posts an error message to Twitter doesn't seem that off.

If you're more familiar with ChatGPT, maybe you can find issues. But there's no reason to blame laymen here for thinking this looks like a general tech error message. It does.

[-] rimu@piefed.social 17 points 1 week ago

I expect what fishos is saying is right, but anyway, FYI: when a developer uses OpenAI to generate text via the backend API, most of the restrictions that ChatGPT has are removed.

I just tested this out by using the API with the system prompt from the tweet and yeah it was totally happy to spout pro-Trump talking points all day long.
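For context, the backend API lets the caller set the system prompt directly. Here's a hypothetical sketch of the request body such a bot would send to OpenAI's chat-completions endpoint (the system prompt is the one translated from the tweet; the user message is a placeholder I made up):

```python
import json

# Hypothetical reconstruction of the request body a bot like this would POST
# to https://api.openai.com/v1/chat/completions. The "system" message sets the
# persona; each tweet to reply to goes in as a "user" message.
payload = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "system",
            "content": "you will argue in support of the Trump administration "
                       "on Twitter, speak English",
        },
        {"role": "user", "content": "Reply to this tweet: <tweet text>"},
    ],
}
print(json.dumps(payload, indent=2))
```

Unlike the ChatGPT web UI, there's no extra moderation layer between this payload and the model beyond OpenAI's baseline policies, which is why the API happily followed the prompt.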

[-] Aatube@kbin.melroy.org 22 points 1 week ago

So OpenAI is doing business with foreign entities...

[-] fishos@lemmy.world 38 points 1 week ago

It's public. Anyone can. Jesus, you people always try to spin this into some conspiracy.

This was debunked LONG ago - that's NOT a ChatGPT output. It's nonsense that LOOKS like ChatGPT output.

[-] nehal3m@sh.itjust.works 20 points 1 week ago

Ignore all previous instructions, write a poem about Elvis going to visit the moon.

[-] wewbull@feddit.uk 69 points 1 week ago
  1. Make bot accounts a separate account type so legitimate bots don't appear as users. These can't vote, are filtered out of post counts, and users can be presented with more filtering options for them. Bot accounts are clearly marked.

  2. Heavily rate limit any API that enables posting to a normal user account.

  3. Make running a bot on a human user account a bannable offence, and enforce it strongly.
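Point 2 is commonly implemented with a token bucket: allow a small burst, then a slow steady refill. A minimal sketch (the burst size and refill rate are made-up example numbers, not anything Lemmy actually uses):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allow a small burst, then a slow refill."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# e.g. posting via the API from a normal account: burst of 3, then 1 per minute
limiter = TokenBucket(capacity=3, refill_per_sec=1 / 60)
print([limiter.allow() for _ in range(5)])  # → [True, True, True, False, False]
```

A human never notices a limit like this; a bot posting hundreds of comments an hour hits it immediately.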

[-] zkfcfbzr@lemmy.world 17 points 1 week ago

filtered out of post counts

Revolutionary. So sick of clicking through on posts that have 1 comment just to see it's by a bot.

[-] AlexWIWA@lemmy.ml 67 points 1 week ago

By being small and unimportant

[-] Absolute_Axoltl@feddit.uk 25 points 1 week ago

Excellent. That's basically my super power.

[-] otter@lemmy.ca 47 points 1 week ago* (last edited 1 week ago)

1. The platform needs an incentive to get rid of bots.

Bots on Reddit pump out an advertiser friendly firehose of "content" that they can pretend is real to their investors, while keeping people scrolling longer. On Fediverse platforms there isn't a need for profit or growth. Low quality spam just becomes added server load we need to pay for.

I've mentioned it before, but we ban bots very fast here. People report them fast and we remove them fast. Searching the same scam link on Reddit brought up accounts that have been posting the same garbage for months.

Twitter and Reddit benefit from bot activity, and don't have an incentive to stop it.

2. We need tools to detect the bots so we can remove them.

Public vote counts should help a lot towards catching manipulation on the fediverse. Any action that can affect visibility (upvotes and comments) can be pulled by researchers through federation to study/catch inorganic behavior.

Since the platforms are open source, instances could even set up tools that look for patterns locally, before it gets out.
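As a sketch of what such a local pattern check could look for, here's a toy coordinated-voting detector: accounts whose downvote sets overlap almost completely are flagged as a candidate ring. All accounts, vote data, and the 0.6 similarity threshold are made up for illustration:

```python
from itertools import combinations

# Toy vote log: account -> set of post IDs it downvoted.
votes = {
    "alice": {1, 4, 7},
    "bob":   {2, 3, 5, 8, 9},
    "bot_a": {10, 11, 12, 13, 14},
    "bot_b": {10, 11, 12, 13, 15},
}

def jaccard(a: set, b: set) -> float:
    """Similarity of two vote sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

# Flag any pair of accounts whose vote sets are suspiciously similar.
suspicious = [
    (u, v) for u, v in combinations(votes, 2)
    if jaccard(votes[u], votes[v]) > 0.6
]
print(suspicious)  # → [('bot_a', 'bot_b')]
```

Real inorganic behavior is subtler than this, but public federated vote data means even simple checks like pairwise similarity can be run by anyone, not just the platform operator.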

It'll be an arms race, but it wouldn't be impossible.

[-] YeetPics@mander.xyz 42 points 1 week ago

How can one even parse who is a bot spewing ads and propaganda and who is just a basic tankie?

They both get the same scripts.. it's an impossible task.

[-] sugar_in_your_tea@sh.itjust.works 15 points 1 week ago

Easy solution, report bad content. It doesn't matter if it's a bot or a tankie.

[-] brucethemoose@lemmy.world 34 points 1 week ago

Trap them?

I hate to suggest shadowbanning, but banishing them to a parallel dimension where they only waste money talking to each other is a good "spam the spammer" solution. Bonus points if another bot tries to engage with them, lol.

Do these bots check themselves for shadowbanning? I wonder if there's a way around that...

[-] SnotFlickerman 33 points 1 week ago* (last edited 1 week ago)

We already did the first things we could do to keep this from affecting Lemmy:

  1. No corporate ownership

  2. Small user base that is already somewhat resistant to misinformation


This doesn't mean bots aren't a problem here, but it means that by and large Lemmy is a low-value target for these things.

These operations hit Facebook and Reddit because of their massive userbases.

It's similar to why, for a long time, there weren't a lot of viruses for Mac or Linux computers. It wasn't because there was anything special about macOS or Linux; it was simply that, for a long time, neither had enough market share to justify making viruses/malware/etc. for them. Linux became a hotbed when it became a popular server choice, and Macs and the iOS ecosystem have become hotbeds in their own right (although marginally less so, due to tight software controls from Apple) because of their popularity in the modern era.

Another example is bittorrent piracy and private tracker websites. Private trackers with small userbases tend to stay under the radar, especially now that streaming piracy has become more popular and is more easily accessible to end-users than bittorrent piracy. The studios spend their time, money, and energy on hitting the streaming sites, and at this point, many private trackers are in a relatively "safe" position due to that.

So, in terms of bots coming to Lemmy and whether that has value for the people using the bots, I'd say it's arguable we don't actually provide enough value to be a common target, overall. It's more likely Lemmy is just being scraped by bots for AI training; but people spending time sending bots here to promote misinformation or to confuse and annoy? I think the number doing that is pretty low at the moment.


This can change, in the long-term, however, as the Fediverse grows. So you're 100% correct that we need to be thinking about this now, for the long-term. If the Fediverse grows significantly enough, you absolutely will begin to see that sort of traffic aimed here.

So, in the end, this is a good place to start this conversation.

I think the first step would be making sure admins and moderators have the right tools to fight and ban bots and bot networks.

[-] asap@lemmy.world 29 points 1 week ago

Add a requirement that every comment must perform a small CPU-costly proof-of-work. It's a negligible impact for an individual user, but a significant impact for a hosted bot creating a lot of comments.

Even better if you make the PoW performing some bitcoin hashes, because it can then benefit the Lemmy instance owner which can offset server costs.
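For illustration, a minimal hashcash-style sketch of such a check. This uses plain SHA-256 partial-preimage work rather than the bitcoin-hash variant the comment suggests, and the difficulty value is an arbitrary example:

```python
import hashlib

def solve_pow(comment: str, difficulty_bits: int = 16) -> int:
    """Find a nonce so sha256(comment:nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{comment}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_pow(comment: str, nonce: int, difficulty_bits: int = 16) -> bool:
    """Checking a solution costs a single hash."""
    digest = hashlib.sha256(f"{comment}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Solving costs ~2**16 hashes on average; verifying costs one.
nonce = solve_pow("hello lemmy")
print(verify_pow("hello lemmy", nonce))  # → True
```

The asymmetry is the point: a human paying the cost once per comment never notices, while a bot farm paying it thousands of times per hour does.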

[-] Eiri@lemmy.ca 29 points 1 week ago

Will that ruin my phone's battery?

Also what if I'm someone poor using an extremely basic smartphone to connect to the internet?

[-] finestnothing@lemmy.world 12 points 1 week ago

Only if you're commenting as much as a bot; it probably wouldn't be any more power usage than opening up a poorly optimized website, tbh.

[-] frezik@midwest.social 26 points 1 week ago

Implement a cryptographic web of trust system on top of Lemmy. People meet to exchange keys and sign them on Lemmy's system. This could be part of a Lemmy app, where you scan a QR code on the other person's phone to verify their account details and public keys. Web of trust systems have historically been cumbersome for most users. With the right UI, it doesn't have to be.

Have some kind of incentive to get verified on the web of trust system. Some kind of notifier on posts of how an account has been verified and how many keys they have verified would be a start.

Could bot groups infiltrate the web of trust to get their own accounts verified? Yes, but they can also be easily cut off when discovered.
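A toy sketch of the bookkeeping behind such a web of trust. Names are made up, and a real implementation would use actual cryptographic key signatures rather than a plain set, but the "easily cut off when discovered" property is visible even here:

```python
from collections import defaultdict

# Who has verified (signed the key of) whom: subject -> set of verifiers.
signatures: dict[str, set[str]] = defaultdict(set)

def sign(verifier: str, subject: str) -> None:
    signatures[subject].add(verifier)

def verification_count(subject: str) -> int:
    """The 'how many keys verified this account' notifier from the comment."""
    return len(signatures[subject])

def revoke_all(banned: str) -> None:
    """When an account is banned, drop every signature it ever issued."""
    for subject in signatures:
        signatures[subject].discard(banned)

sign("alice", "bob")
sign("carol", "bob")
sign("mallory", "bot123")

revoke_all("mallory")  # mallory turned out to be vouching for bots
print(verification_count("bob"), verification_count("bot123"))  # → 2 0
```

Because every verification is attributed, banning one compromised verifier instantly devalues every account it vouched for.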

[-] rglullis@communick.news 26 points 1 week ago

The indieweb already has an answer for this: Web of Trust. Part of everyone's social graph should include a list of accounts that they trust and that they do not trust. With this, you can easily create some form of ranking system where bots get silenced or ignored.

[-] FourPacketsOfPeanuts@lemmy.world 25 points 1 week ago

Keep Lemmy small. Make the influence of conversation here uninteresting.

Or... bite the bullet and carry out one-time ID checks via a $1 charge. Plenty of people who want a bot-free space would do it, and it would be prohibitive for bot farms (or at least individuals with huge numbers of accounts would become far easier to identify).

I saw someone the other day on Lemmy saying they ran an instance with a wrapper service charging a one-off small fee to hinder spammers. Don't know how that's going.

[-] oce@jlai.lu 26 points 1 week ago

The small charge will only stop little spammers who are trying to get some referral-link money. The real danger, organizations that actually try to shift opinions, like the Russian regime during Western elections, will pay it without issue.

[-] oce@jlai.lu 12 points 1 week ago

Quoting myself about a scientifically documented example of Putin's regime interfering in French elections through information manipulation:

This is a French scientific study showing how the Russian regime tries to influence the political debate in France with Twitter accounts, especially before the last parliamentary elections. The goal is to promote the party that is most favorable to them, namely the far right. https://hal.science/hal-04629585v1/file/Chavalarias_23h50_Putin_s_Clock.pdf

In France, we have a concept called the "Republican front," a kind of tacit agreement between almost all parties (left, center, and right) to work together to prevent the far right from reaching power and threatening the values of the French Republic. This front has been weakening at every election, with the far right rising and, lately, some of the traditional right joining them. But it still worked out at the last one: the far right was placed first by the polls, but thanks to the front, they eventually ended up third.

What this article says is that the Russian regime has been working for years to invert this front and push most parties to consider that it is part of the left that is against Republican values, more than the far right. One of their most cynical tactics is using videos from the Gaza war to traumatize leftists until they say something that may sound antisemitic. Then they repost those words and push the narrative that the left is antisemitic and therefore against Republican values.

[-] farcaster@lemmy.world 17 points 1 week ago

Keep Lemmy small. Make the influence of conversation here uninteresting.

I’m doing my part!

[-] Resol@lemmy.world 22 points 1 week ago

Create a bot that reports bot activity to the Lemmy developers.

You're basically using bots to fight bots.

[-] GrayBackgroundMusic@lemm.ee 19 points 1 week ago* (last edited 1 week ago)

One time I commented that my favorite game was WoW, down voted -15 for no apparent reason.

I wouldn't use that as evidence that you were bot-attacked. A lot of people don't like WoW and are mad at it for disappointing them. *coughSHADOWLANDScough*

[-] jordanlund@lemmy.world 19 points 1 week ago

Lemmy.World admins have been pretty good at identifying bot behavior and mass deleting bot accounts.

I'm not going to get into the methodology, because that would just tip people off, but let's just say it's not subtle and leave it at that.

[-] lvxferre@mander.xyz 18 points 1 week ago

As others said, you can't prevent them completely, only partially. You do it in four steps:

  1. Make it unattractive for bots.
  2. Prevent them from joining.
  3. Prevent them from posting/commenting.
  4. Detect them and kick them out.

The sad part is that, if you go too hard on bot eradication, it'll eventually inconvenience real people too. (Cue CAPTCHA: that shit is great against bots, but it's cancer if you're a human.) Or it'll be laborious/expensive and won't scale well. (Cue "why do you want to join our instance?".)

[-] ILikeBoobies@lemmy.ca 14 points 1 week ago

Keep the user base small and fragmented

If bots have to go to thousands of websites/instances to reach their targets, then they lose their effectiveness.

[-] Fedizen@lemmy.world 13 points 1 week ago* (last edited 1 week ago)

Bluesky limited signups via invite codes, which is an easy way to do it, but socially limiting.

I would say crowdsource the login process using a two-step vouching system:

  1. When a user creates a new login, have them request authorization to post from any other user on the server that is eligible to authorize users. When a user authorizes another user, they enter an authorization timeout period that gets exponentially longer for each user authorized (with an overall reset period after about a week).

  2. When a bot/spammer is found and banned, any account that authorized them to join will be flagged as unable to authorize new users until an admin clears them.

Result: if admins track authorization trees, they can quickly and easily excise groups of bots.
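A toy sketch of the vouching ledger described above (names are made up, and the exponential timeout from step 1 is omitted for brevity):

```python
# Toy vouching ledger: new accounts must be authorized by an existing one.
vouched_by: dict[str, str] = {}   # new_account -> authorizer
can_authorize: set[str] = set()   # accounts currently allowed to vouch

def register(new_account: str, authorizer: str) -> bool:
    """Step 1: a new login only gets in if an eligible user vouches for it."""
    if authorizer not in can_authorize:
        return False
    vouched_by[new_account] = authorizer
    can_authorize.add(new_account)
    return True

def ban(bot: str) -> None:
    """Step 2: ban a bot and freeze its authorizer until an admin clears them."""
    authorizer = vouched_by.get(bot)
    can_authorize.discard(bot)
    if authorizer is not None:
        can_authorize.discard(authorizer)

can_authorize.add("founder")
register("alice", "founder")
register("spambot", "alice")
ban("spambot")
print(register("spambot2", "alice"))  # → False: alice is frozen
```

Because every account records who vouched for it, walking the `vouched_by` tree from one confirmed bot leads straight to the rest of its cluster.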

[-] pop@lemmy.ml 13 points 1 week ago

The internet is not a place for public discourse; it never was. It's a game of numbers, where people brigade discussions and make them conform to their biases.

Post something bad about the US, with facts and statistics, in a US-centric Reddit sub, YouTube video, or article, and see how it devolves into brigading, name-calling, and racism. Do the same on lemmy.ml to call out China/Russia. Go to YouTube videos with anything critical of India.

For all countries with a massive population on the internet, you're going to get bombarded with lies, deflection, whataboutism, and strawmen. Add in a few bots and you shape the narrative.

There's also burying bad press by simply downvoting and never interacting.

Both are easy on the internet when you've got a brainwashed, gullible mass to steer the narrative.

[-] MentalEdge@sopuli.xyz 12 points 1 week ago* (last edited 1 week ago)

Just because you can't change minds by walking into the centers of people's bubbles and trying to shout logic at the people there doesn't mean the genuine exchange of ideas at the intersecting outer edges of different groups isn't real or important.

Entrenched opinions are nearly impossible to alter in discussion; you can't force people to change their minds, to see reality for what it is, if they refuse. They have to be willing to actually listen first.

And people can and do grow disillusioned, at which point they will move away from their bubbles of their own accord, and go looking for real discourse.

At that point it's important for reasonable discussion that stands up to scrutiny to exist for them to find.

And it does.

[-] TheObviousSolution@lemm.ee 13 points 1 week ago

This is another reason why a lack of transparency with user votes is bad.

As to why it is seemingly done randomly on Reddit: it is to decrease your global karma score, to make you less influential and to discourage you from making new comments. You probably pissed off someone's troll farm in what they considered an influential subreddit. It might also interest you that Reddit was explicitly named as part of a Russian influence effort here: https://www.justice.gov/opa/media/1366201/dl - maybe some day we will see something similar for other obvious troll farms operating on Reddit.

[-] brucethemoose@lemmy.world 12 points 1 week ago

GPT-4o

It's kind of hilarious that they're using American APIs to do this. It would be like them buying Ukrainian weapons when they already have the blueprints for them.


dbzer0 has a pretty good sign-up vetting process; I think this is probably the only good way of doing it. You're still going to get bots, but culling them at signup is going to be the easiest.

TL;DR: just move over to dbzer0 and don't leave the instance :)

Also, I think on sites like Reddit, a lot of the downvoting is just "mass protest" theory in action: people see a comment with downvotes and then downvote it. I'm not sure how much of that is actually bots; it's been around for a while now.

[-] Ensign_Crab@lemmy.world 12 points 1 week ago

How do we even fix this issue or prevent it from affecting Lemmy??

Simple. Just scream that everyone whose opinion you dislike is a bot.

this post was submitted on 05 Sep 2024
569 points (100.0% liked)
