submitted 1 week ago* (last edited 1 week ago) by ptz@dubvee.org to c/fuck_ai@lemmy.world

Russia is automating the spread of false information to fool artificial intelligence chatbots on key topics, offering a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform.

Experts warn the problem is worsening as more people rely on chatbots rushed to market, social media companies cut back on moderation and the Trump administration disbands government teams fighting disinformation.

“Most chatbots struggle with disinformation,” said Giada Pistilli, principal ethicist at open-source AI platform Hugging Face. “They have basic safeguards against harmful content but can’t reliably spot sophisticated propaganda, [and] the problem gets worse with search-augmented systems that prioritize recent information.”

Russia and, to a lesser extent, China have been exploiting that advantage by flooding the zone with fables. But anyone could do the same, burning up far fewer resources than previous troll farm operations.

[-] asg101 3 points 1 week ago

Garbage in, garbage out. I have been advocating fucking up the algorithm in just that manner for years. Glad to see the tactic taking off.

this post was submitted on 19 Apr 2025
47 points (100.0% liked)

Fuck AI

2543 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago