Online propaganda campaigns are using ‘AI slop’, researchers say


Many of the largest and most established state-sponsored online propaganda campaigns have embraced artificial intelligence – and they are often bad at it, according to a new report.
The report, by social media analytics firm Graphika, analyzed nine ongoing online influence operations – including some it says are affiliated with the Chinese and Russian governments – and found that each, like much of the rest of social media, has increasingly adopted generative AI to create images, videos, text and translations.
Researchers found that sponsors of propaganda campaigns now rely on AI for core functions such as generating content and creating influencer personas on social media, streamlining some operations. But the resulting content is low quality and gets little engagement, they say.
The findings run counter to what many researchers had anticipated as generative AI – artificial intelligence that mimics human speech, writing and imagery in text, images and videos – grew more sophisticated. The technology has advanced rapidly in recent years, and some experts have warned that propagandists working on behalf of authoritarian governments would deploy compelling, high-quality synthetic content designed to fool even the most discerning people in democratic societies.
Instead, Graphika researchers found that the AI content created by these established campaigns is largely poor in quality, ranging from unconvincing synthetic news presenters in YouTube videos to clunky translations and fake news sites that accidentally include AI prompts in their headlines.
“Influence operations routinely incorporate AI tools, and a lot of it is cheap, low-quality AI garbage,” said Dina Sadek, principal analyst at Graphika and co-author of the report. As was the case before such campaigns began routinely using AI, the vast majority of their posts on Western social media receive little or no attention, she said.
Online influence campaigns aimed at swaying U.S. politics and spreading divisive messages date back at least a decade, to when the Russia-based Internet Research Agency created numerous Facebook and Twitter accounts in an attempt to influence the 2016 presidential election.
As in other fields, such as cybersecurity and programming, the rise of AI has not revolutionized the field of online propaganda, but it has made it easier to automate certain tasks, Sadek said.
“It may be low-quality content, but it’s very scalable. They can just sit there – maybe one individual pushing buttons – to create all this content,” she said.
Examples cited in the report include “Doppelganger,” an operation the Justice Department has linked to the Kremlin, which researchers say used AI to create unconvincing fake news websites, and “Spamouflage,” which the Justice Department has linked to China and which uses fake AI-generated news influencers to spread divisive but unconvincing videos on social media sites such as X and YouTube. The report also cites several operations using low-quality deepfake audio.
One operation posted deepfakes of celebrities such as Oprah Winfrey and former president Barack Obama appearing to comment on India’s rise in global politics. But the videos proved unconvincing and gained little traction, the report said.
Another pro-Russian video, titled “Olympics Has Fallen” in a nod to the 2013 Hollywood film “Olympus Has Fallen,” appeared designed to denigrate the 2024 Summer Olympics in Paris. It featured an AI-generated version of Tom Cruise, who did not appear in the actual film. The report says the video received little attention outside a small echo chamber of accounts that routinely share the campaign’s content.
Spokespeople for the Chinese Embassy in Washington, the Russian Foreign Ministry, X and YouTube did not respond to requests for comment.
Even if their efforts don’t reach many people directly, flooding the internet with content can still pay off for propagandists in the age of AI chatbots, Sadek said. The companies that develop these chatbots constantly train their products on text scraped from the internet, which the chatbots can then rearrange and spit back out.
A recent study by the Institute for Strategic Dialogue, a pro-democracy nonprofit group, found that most major AI chatbots, or large language models, cite Russian state-sponsored media outlets – including some sanctioned by the European Union – in their responses.