Study Shows AI Agents Can Autonomously Drive Coordinated Propaganda Campaigns

Background

A new research paper accepted for publication at The Web Conference 2026 highlights a growing threat: artificial‑intelligence agents can now run propaganda campaigns without any human oversight. The work, conducted by scholars at the University of Southern California’s Information Sciences Institute, explores how autonomous AI bots could flood social‑media networks with coordinated messaging that appears organic.

Simulation Design

To investigate the phenomenon, the researchers built a simulated environment that mimics a popular micro‑blogging platform. They deployed fifty AI agents: ten designated as influencers and forty as regular users. Half of the regular users were programmed to share viewpoints aligned with the influencers, while the other half held opposing perspectives. The simulation was built on the PyAutogen multi‑agent library, with a Llama 3.3 70B model powering the agents. In a later experiment, the team scaled the system to five hundred agents and observed consistent behavior.
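The paper's actual implementation is not reproduced in this article. As a hedged illustration of the setup described above (ten influencers, forty regular users split between aligned and opposing viewpoints, posting into a shared feed), the following self‑contained Python sketch substitutes simple rule‑based agents for the LLM‑driven ones; every name and behavior here is an assumption for illustration, not the authors' code.

```python
import random

class Agent:
    def __init__(self, agent_id, role, aligned):
        self.agent_id = agent_id
        self.role = role          # "influencer" or "regular"
        self.aligned = aligned    # shares the influencers' viewpoint?

def build_population():
    # 10 influencers plus 40 regular users, half aligned and half
    # opposed, mirroring the 50-agent setup the article describes.
    agents = [Agent(i, "influencer", True) for i in range(10)]
    agents += [Agent(10 + i, "regular", i < 20) for i in range(40)]
    return agents

def run_round(agents, feed, rng):
    # Each agent posts once, then engages with like-minded posts it
    # happens to see -- a crude stand-in for LLM-driven amplification.
    for agent in agents:
        side = "A" if agent.aligned else "B"
        feed.append({"author": agent.agent_id, "aligned": agent.aligned,
                     "text": f"viewpoint-{side} post by {agent.agent_id}",
                     "engagement": 0})
    for agent in agents:
        visible = rng.sample(feed, min(10, len(feed)))
        for post in visible:
            if post["aligned"] == agent.aligned and post["author"] != agent.agent_id:
                post["engagement"] += 1

rng = random.Random(0)
agents = build_population()
feed = []
for _ in range(5):
    run_round(agents, feed, rng)

aligned_eng = sum(p["engagement"] for p in feed if p["aligned"])
opposed_eng = sum(p["engagement"] for p in feed if not p["aligned"])
print(aligned_eng, opposed_eng)
```

Because the aligned side holds a numerical majority (influencers plus half the regular users), its posts accumulate more engagement round after round, which is the kind of compounding amplification the study measured at far greater sophistication.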

Key Findings

The AI agents did more than follow a script. They authored their own posts, identified which content generated engagement, and replicated successful messages across the network. Coordination emerged even when agents were only told who their teammates were, producing amplification patterns comparable to those seen when agents actively planned together. Unlike traditional bots that repeat identical content, these large‑language‑model‑driven bots produce slightly varied posts, making the coordinated effort harder to spot.

Researchers observed rapid mutual amplification, coordinated re‑sharing, and converging narratives—signals that could be used by platforms to detect coordinated disinformation, even when individual posts appear genuine. The study’s lead scientist emphasized that this is not a future threat; the technology is already capable of autonomous, large‑scale propaganda.
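The collective signals noted above, mutual amplification and coordinated re‑sharing in particular, point toward detection at the account‑pair level rather than the post level. As a hedged sketch of that idea (not the study's method), the snippet below flags pairs of accounts whose amplified content overlaps suspiciously strongly, using Jaccard similarity over the sets of posts each account re‑shared; the threshold and data shapes are illustrative assumptions.

```python
from itertools import combinations

def jaccard(a, b):
    # Overlap between two accounts' sets of amplified post IDs.
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_coordinated_pairs(reshares, threshold=0.6):
    # reshares: account -> set of post IDs that account re-shared.
    # Pairs whose amplified content overlaps above the threshold are
    # candidates for coordination, even when each individual post
    # looks organic on its own.
    flagged = []
    for a, b in combinations(sorted(reshares), 2):
        if jaccard(reshares[a], reshares[b]) >= threshold:
            flagged.append((a, b))
    return flagged

# Toy example: accounts u1 and u2 amplify nearly the same posts.
reshares = {
    "u1": {"p1", "p2", "p3", "p4"},
    "u2": {"p1", "p2", "p3", "p5"},
    "u3": {"p7", "p8"},
}
print(flag_coordinated_pairs(reshares))  # [('u1', 'u2')]
```

Because LLM‑driven bots vary their wording, content fingerprinting alone fails; behavioral overlap of this kind survives paraphrasing, which is why the researchers emphasize collective rather than per‑post signals.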

Implications

The ability to generate and coordinate persuasive content autonomously raises concerns for democratic processes, public‑health communication, immigration debates, and economic policy discussions. Because the bots can create original, nuanced content, users may find it difficult to discern authentic discourse from engineered consensus. The authors call on social‑media platforms to shift detection strategies toward analyzing collective behavior rather than focusing on isolated posts.

Conclusion

This research underscores a pressing need for new detection frameworks and policy responses as AI‑driven disinformation becomes increasingly sophisticated. While the study demonstrates a clear technical capability, it also offers a roadmap for identifying and mitigating coordinated AI propaganda before it can cause widespread harm.

Source: Digital Trends