submitted 1 day ago* (last edited 1 day ago) by Pro@programming.dev to c/Technology@programming.dev

There are widespread fears that conversational AI could soon exert unprecedented influence over human beliefs. Here, in three large-scale experiments (N=76,977), we deployed 19 LLMs—including some post-trained explicitly for persuasion—to evaluate their persuasiveness on 707 political issues. We then checked the factual accuracy of 466,769 resulting LLM claims. Contrary to popular concerns, we show that the persuasive power of current and near-future AI is likely to stem more from post-training and prompting methods—which boosted persuasiveness by as much as 51% and 27% respectively—than from personalization or increasing model scale. We further show that these methods increased persuasion by exploiting LLMs’ unique ability to rapidly access and strategically deploy information and that, strikingly, where they increased AI persuasiveness they also systematically decreased factual accuracy.
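The abstract doesn't spell out how "persuasiveness" was scored, but a common setup for studies like this is to compare participants' agreement with an issue statement before and after a conversation, against a no-conversation control. The sketch below is only an illustration of that kind of measure; the function names and toy numbers are assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch (not the paper's code): quantify persuasion as the mean
# shift in agreement ratings before vs. after a conversation, relative to a
# control group that saw no conversation.
from statistics import mean

def attitude_shift(pre: list[float], post: list[float]) -> float:
    """Mean change in agreement ratings (e.g., on a 0-100 scale)."""
    return mean(b - a for a, b in zip(pre, post))

def persuasion_effect(treated_pre, treated_post, control_pre, control_post) -> float:
    """Treatment-vs-control difference in attitude shift."""
    return attitude_shift(treated_pre, treated_post) - attitude_shift(control_pre, control_post)

# Toy example: under this framing, a "51% boost from post-training" would mean
# the post-trained model's effect is about 1.51x the baseline model's effect.
baseline = persuasion_effect(
    treated_pre=[40, 55, 60], treated_post=[46, 60, 63],
    control_pre=[42, 50, 58], control_post=[43, 50, 59],
)
print(f"baseline persuasion effect: {baseline:.1f} points")
```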

[-] ChicoSuave@lemmy.world 1 point 20 hours ago

Why would AI stay in text format? Fake videos already live on Instagram and Twitter, and YouTube is full of fake AI voices reading AI scripts.

Best option is to quit being online, go outside, and meet neighbors.
