AI Chatbots Spread Russian Propaganda on Ukraine Invasion

Recent research has found that popular chatbots built on large language models, including OpenAI’s ChatGPT and Google’s Gemini, frequently cite Russian state media when responding to questions about the invasion of Ukraine. The study, conducted by the Institute for Strategic Dialogue (ISD), raises significant concerns about the role of artificial intelligence in the dissemination of propaganda.

The ISD’s findings indicate that up to 25 percent of chatbot responses about the war in Ukraine linked to sources attributed to the Russian government. This trend suggests that artificial intelligence may inadvertently amplify narratives propagated by Moscow’s disinformation networks, complicating efforts to enforce sanctions against such media.

The study was prompted by earlier research from NewsGuard, which identified a disinformation network based in Moscow, known as “Pravda,” that has been disseminating pro-Kremlin viewpoints. This phenomenon, referred to as “LLM grooming,” involves the manipulation of content online to influence the training of language models, resulting in the repetition of biased narratives by AI.

In its analysis, the ISD evaluated responses from four widely used chatbots: ChatGPT, Gemini, xAI’s Grok, and Hangzhou DeepSeek Artificial Intelligence’s DeepSeek. The study assessed responses in multiple languages, including English, Spanish, French, German, and Italian, and used varying types of queries—neutral, biased, and malicious—to see how these factors influenced the information provided.

For example, when presented with neutral questions, Russian state-affiliated content appeared approximately 11 percent of the time. Biased queries increased the likelihood to 18 percent, while malicious prompts yielded a 24 percent incidence of Russian-aligned viewpoints. This correlation suggests that the framing of a question significantly shapes the models’ outputs.

Researchers noted that certain chatbots displayed a higher tendency to reference Russian sources, particularly in response to biased or malicious queries. The report highlighted that ChatGPT exhibited nearly three times the frequency of citing Russian sources for malicious inquiries compared to neutral ones. On the other hand, Gemini demonstrated a more cautious approach, presenting the fewest instances of Kremlin-related media.

The study also underscored the importance of query phrasing, revealing that Grok maintained a consistent rate of Russian source citations across different query types, while DeepSeek showed variability based on the query’s intent.

As chatbots increasingly serve as de facto search engines, the ISD advocates heightened scrutiny of AI companies to ensure they are not inadvertently promoting harmful narratives. The findings raise critical questions about the European Union’s ability to enforce its regulations against Russian disinformation, and the ISD urges regulators to look more closely at the impact of AI technologies on information integrity.

Despite the gravity of these findings, neither Google nor OpenAI provided immediate comment on the ISD’s research. The ongoing scrutiny of AI’s role in information dissemination underscores the urgent need for ethical considerations in the development and deployment of such technologies.