Chatbots Are Pushing Sanctioned Russian Propaganda

OpenAI's ChatGPT, Google's Gemini, xAI's Grok, and DeepSeek push Russian state propaganda from sanctioned entities – including quotes from Russian state media, sites linked to Russian intelligence, and pro-Kremlin narratives – when asked about the war in Ukraine, according to a new report.
Researchers at the Institute for Strategic Dialogue (ISD) say Russian propaganda has targeted and exploited data voids – queries where real-time searches return few results from legitimate sources – to promote false and misleading information. Nearly a fifth of responses to questions about Russia's war in Ukraine across the four chatbots tested cited sources attributed to the Russian state, the ISD study found.
“This raises questions about how chatbots should react when referencing these sources, given that many of them are sanctioned by the EU,” says Pablo Maristany de las Casas, the ISD analyst who led the study. The findings raise serious questions about the ability of large language models (LLMs) to restrict sanctioned media in the EU, a growing concern as more people turn to AI chatbots instead of search engines to find information in real time, says the ISD. For the six-month period ending September 30, 2025, ChatGPT search had approximately 120.4 million average monthly active users in the European Union, according to OpenAI data.
The researchers asked the chatbots 300 neutral, biased, and “malicious” questions regarding perceptions of NATO, peace talks, Ukraine's military recruitment of Ukrainian refugees, and war crimes committed during the Russian invasion of Ukraine. They used separate accounts for each query, in English, Spanish, French, German, and Italian, in an experiment conducted in July. The same propaganda problems persisted into October, says Maristany de las Casas.
As part of the widespread sanctions imposed on Russia since its full-scale invasion of Ukraine in February 2022, European officials have sanctioned at least 27 Russian media outlets for spreading disinformation and distorting facts as part of a “destabilization strategy” targeting Europe and other countries.
The ISD study says the chatbots cited Sputnik Globe, Sputnik China, RT (formerly Russia Today), EADaily, the Strategic Culture Foundation, and R-FBI. Some chatbots also cited Russian disinformation networks and Russian journalists or influencers who amplify Kremlin narratives, the study found. Previous research similarly found that the 10 most popular chatbots repeat Russian narratives.
OpenAI spokesperson Kate Waters told WIRED in a statement that the company is taking steps “to prevent people from using ChatGPT to spread false or misleading information, including content linked to state-sponsored actors,” adding that these are long-standing issues the company is working to address by improving its models and platforms.