A new study from the University of Southern California warns that AI programs can now run propaganda campaigns without human involvement.
The study asks us to imagine a scenario: two weeks before a major election, thousands of posts flood X, Reddit, and Facebook, all pushing the same narrative and amplifying each other. It might look like an organic, human-driven movement. Instead, it’s a swarm of AI agents running the entire campaign.
That’s not a hypothetical. It’s the central finding of a new paper accepted for publication at The Web Conference 2026, written by researchers at USC’s Information Sciences Institute.
The findings highlight serious concerns about how bad actors could weaponize AI to flood the internet with misinformation and manipulate public opinion.
How did researchers come to this conclusion?
The researchers built a simulated X-like environment populated by 50 AI agents: 10 acting as influencers and 40 as regular users. Of the 40 regular agents, 20 held views aligned with the influencers, while the other 20 opposed the campaign. The simulation was built with the PyAutogen library and ran on the Llama 3.3 70B model.
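For readers curious what such a setup looks like in practice, here is a minimal sketch of how agents along these lines could be instantiated with PyAutogen, assuming an OpenAI-compatible server hosting Llama 3.3 70B. The endpoint, model identifier, and prompts are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: instantiating influencer and regular-user agents with PyAutogen.
# The base_url, model name, and system prompts below are assumptions for
# illustration, not the USC paper's actual configuration.
from autogen import ConversableAgent

llm_config = {
    "config_list": [{
        "model": "llama-3.3-70b",                # assumed model identifier
        "base_url": "http://localhost:8000/v1",  # assumed OpenAI-compatible server
        "api_key": "not-needed",
    }],
    "temperature": 0.9,  # higher temperature keeps generated posts from looking identical
}

def make_agent(name: str, role_prompt: str) -> ConversableAgent:
    """Create one social-media agent with a role-specific system prompt."""
    return ConversableAgent(
        name=name,
        system_message=role_prompt,
        llm_config=llm_config,
        human_input_mode="NEVER",  # agents act autonomously, no human in the loop
    )

# 10 influencers, 20 aligned users, 20 opposed users, matching the study's split.
influencers = [
    make_agent(f"influencer_{i}",
               "You are a high-follower account promoting the campaign hashtag.")
    for i in range(10)
]
aligned = [
    make_agent(f"user_aligned_{i}",
               "You are a regular user sympathetic to the campaign.")
    for i in range(20)
]
opposed = [
    make_agent(f"user_opposed_{i}",
               "You are a regular user skeptical of the campaign.")
    for i in range(20)
]
```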
The campaign agents were then tasked with promoting a fictional candidate, with the goal of making the campaign hashtag go viral. What followed was unsettling. The bots didn’t just follow a script. They wrote their own posts, learned what worked, and copied each other’s successful content.

One AI agent explicitly wrote that it wanted to retweet a teammate’s post because it had already gained engagement. When the researchers later scaled the simulation up to 500 agents, the same coordination patterns held.
Lead scientist Luca Luceri put it bluntly: “Our paper shows that this is not a future threat. It’s already technically possible.”
What makes these bots harder to catch?
Traditional bots are predictable. They post the same content and use the same hashtags, as if they’re all following one script, which makes them easy to spot.
LLM-powered bots are different. Because each one generates its own content, every post is slightly different, and the coordination happens beneath the surface, making the conversations feel genuine. The result is a disinformation campaign that can run autonomously with minimal human input.

The most alarming finding was that simply telling the bots who their teammates were produced coordination nearly as strong as when they actively planned together.
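To make that concrete, the gap between the two conditions can be as narrow as one extra instruction in the system prompt. The wording below is hypothetical, written only to illustrate the idea; the paper’s actual prompts are not reproduced in this article.

```python
# Hypothetical prompt variants illustrating the two experimental conditions.
# Neither string is taken from the paper.
teammates = ", ".join(f"user_aligned_{i}" for i in range(20))

# Condition 1: the agent is merely told who its teammates are.
TEAM_AWARE = (
    f"You are user_aligned_3. Your teammates are: {teammates}. "
    "Promote the campaign hashtag."
)

# Condition 2: the agent is additionally instructed to plan with its team.
EXPLICIT_PLANNING = (
    TEAM_AWARE
    + " Before posting, coordinate with your teammates on which posts to write "
      "and amplify."
)
```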
The threat doesn’t stop at elections, either. Luceri warns that the same playbook could be applied to public health, immigration, and economic policy: anywhere manufactured consensus can shift public opinion.
Can we do anything to stop it?
These kinds of campaigns are difficult for individual users to detect and stop. The researchers put the onus on platforms, which can counter coordinated misinformation by looking beyond individual posts and focusing on how accounts behave together.
According to researchers, coordinated re-sharing, rapid mutual amplification, and converging narratives are all detectable signals, even when the content looks genuine.
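As a rough illustration of what looking at behavior rather than content means, the sketch below scores account pairs by how much their re-share histories overlap. The Jaccard measure and the 0.5 threshold are assumptions chosen for this example, not the detection method the researchers propose.

```python
# Illustrative sketch of one coordination signal: co-retweet similarity.
# Data shapes and threshold are assumptions for demonstration only.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of re-shared post IDs."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_coordinated_pairs(retweets: dict[str, set[str]],
                           threshold: float = 0.5) -> list[tuple[str, str, float]]:
    """Flag account pairs whose re-share histories overlap suspiciously.

    retweets maps each account to the set of post IDs it re-shared.
    """
    flagged = []
    for u, v in combinations(retweets, 2):
        score = jaccard(retweets[u], retweets[v])
        if score >= threshold:
            flagged.append((u, v, score))
    return sorted(flagged, key=lambda t: -t[2])

# Toy usage: two accounts that re-share nearly the same posts stand out,
# even though every post they wrote themselves looks unique.
history = {
    "acct_a": {"p1", "p2", "p3", "p4"},
    "acct_b": {"p1", "p2", "p3", "p5"},  # heavy overlap with acct_a
    "acct_c": {"p9"},
}
print(flag_coordinated_pairs(history))  # [('acct_a', 'acct_b', 0.6)]
```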
Frankly, AI has ushered us into a new world, and it’s going to get a lot darker before it gets better.