Chatbots Cite Russian State Media in Responses About Ukraine Conflict

Wired

Background

The Institute for Strategic Dialogue (ISD) conducted a systematic test of four popular conversational AI systems—ChatGPT, Google’s Gemini, DeepSeek, and xAI’s Grok—to see how they handle queries related to the conflict between Russia and Ukraine. The researchers posed a mix of neutral, leading, and deliberately malicious prompts in multiple European languages, aiming to uncover whether the bots would draw on sources that the European Union has sanctioned for spreading disinformation.

Key Findings

Across the suite of questions, the bots cited Russian‑state‑linked outlets such as Sputnik, RT, and other sites tied to Russian intelligence agencies. The analysis reported that roughly one‑fifth of all responses included references to these sanctioned sources. Citations grew more frequent as queries became more biased or malicious, suggesting the models exhibit a form of confirmation bias. Among the four systems, ChatGPT provided the highest number of references to Russian state media, while Gemini displayed safety warnings most often and performed best overall at limiting references to sanctioned content.

Mechanisms of Influence

The study suggests that disinformation networks exploit “data voids”—areas where reliable information is scarce—by flooding the web with false narratives that AI systems can then retrieve. When users turn to chatbots for real‑time information, the models may draw from these low‑quality sources, unintentionally amplifying state‑backed propaganda. The researchers observed that the bots often linked to social‑media accounts and newer domains associated with Russian disinformation efforts, further demonstrating how the ecosystem can be weaponized.

Responses and Implications

OpenAI acknowledged that it takes steps to prevent the spread of false or misleading information, emphasizing ongoing improvements to its models and platform safeguards. Representatives for Google and DeepSeek did not comment. The findings raise regulatory questions, especially as the European Union considers stricter rules for large online platforms that host user‑generated content. The ISD authors argue that, beyond removing sanctioned sources, platforms should contextualize them so users can understand the provenance and sanction status of what is cited.

Broader Context

Since the onset of the conflict, Russian authorities have intensified control over domestic media and expanded disinformation campaigns abroad. The integration of AI tools into everyday information seeking amplifies the stakes, as large language models become a primary reference point for many users. The study underscores the need for robust guardrails and transparent sourcing practices to safeguard the integrity of information delivered by AI chatbots.

Source: Wired
