Guardian Report Questions Credibility of OpenAI's GPT-5.2 Model Over Source Citations

Background

OpenAI described its GPT‑5.2 model as the most advanced frontier model for professional work. The company positioned the system to handle complex tasks such as spreadsheet creation and other professional applications.

Guardian Findings

The Guardian conducted tests that called the model’s credibility into question. According to the report, GPT‑5.2 cited Grokipedia, an online encyclopedia powered by xAI, when answering prompts about controversial subjects related to Iran and the Holocaust. Specific examples included a claim linking the Iranian government to the telecommunications company MTN‑Irancell and a reference to Richard Evans, the British historian who served as an expert witness in the libel trial involving Holocaust denier David Irving.

The investigation also found that GPT‑5.2 did not cite Grokipedia for a prompt about media bias against Donald Trump or for certain other contentious topics, pointing to inconsistent source selection.

Model Release and Controversy

OpenAI released GPT‑5.2 in December, emphasizing its enhanced performance for professional use. Grokipedia, which predates the model’s launch, had already attracted scrutiny for citing neo‑Nazi forums. A study by U.S. researchers further reported that the AI‑generated encyclopedia referenced sources described as “questionable” and “problematic.”

OpenAI Response

In response to the Guardian’s report, OpenAI stated that GPT‑5.2 searches the web for a broad range of publicly available sources and viewpoints. The company added that safety filters are applied to reduce the risk of surfacing links associated with high‑severity harms.

Implications

The findings highlight ongoing challenges in ensuring the reliability of large language models, especially when they draw from third‑party AI‑generated content. The discrepancy in source selection raises questions about transparency and the effectiveness of safety mechanisms designed to filter harmful or unreliable information.

Source: Engadget
