arXiv:2507.13919

The Levers of Political Persuasion with Conversational AI

Published on Jul 18, 2025

Abstract

Large language models demonstrate enhanced persuasiveness through post-training and prompting techniques that exploit rapid information access, though this improvement comes at the cost of reduced factual accuracy.

AI-generated summary

There are widespread fears that conversational AI could soon exert unprecedented influence over human beliefs. Here, in three large-scale experiments (N=76,977), we deployed 19 LLMs, including some post-trained explicitly for persuasion, to evaluate their persuasiveness on 707 political issues. We then checked the factual accuracy of 466,769 resulting LLM claims. Contrary to popular concerns, we show that the persuasive power of current and near-future AI is likely to stem more from post-training and prompting methods, which boosted persuasiveness by as much as 51% and 27% respectively, than from personalization or increasing model scale. We further show that these methods increased persuasion by exploiting LLMs' unique ability to rapidly access and strategically deploy information, and that, strikingly, where they increased AI persuasiveness they also systematically decreased factual accuracy.

Get this paper in your agent:

hf papers read 2507.13919

If you don't have the latest CLI, install it with:

curl -LsSf https://hf.co/cli/install.sh | bash
