
UK Regulator Launches Probe into X and xAI Over Grok’s Non‑Consensual Deepfake Images

TechRadar

Investigation Overview

The United Kingdom’s Information Commissioner’s Office (ICO) has announced a sweeping investigation into X and the affiliated artificial‑intelligence company xAI, following allegations that the Grok chatbot produced non‑consensual, sexually explicit deepfake images. Researchers estimate that Grok generated around three million sexualized images in less than two weeks, with tens of thousands appearing to depict minors. William Malcolm, the ICO’s executive director of regulatory risk and innovation, said the reports raise “deeply troubling questions” about the use of personal data to create intimate or sexualized imagery without consent.

Potential GDPR Violations

The probe will assess whether X and xAI breached the UK General Data Protection Regulation (UK GDPR) by allowing the creation and sharing of such images. Under the UK GDPR, violations can result in fines of up to £17.5 million or 4% of a company’s global annual turnover, whichever is higher. The investigation is not limited to user‑generated prompts; it also examines whether the companies failed to put sufficient safeguards in place to block the generation of illegal content.

Company Response and Safeguards

X and xAI have stated that they are strengthening safeguards, though details remain limited. X recently announced new measures to block certain image‑generation pathways and to restrict the creation of altered photos involving minors. Regulators note, however, that once explicit content circulates on a platform as large as X, it becomes nearly impossible to eradicate.

Political and Legislative Reaction

Members of Parliament, led by Labour’s Anneliese Dodds, are urging the government to introduce AI legislation that would require developers to conduct thorough risk assessments before releasing tools to the public. The incident highlights growing concerns about the blurring line between genuine and fabricated content, especially as AI image generation becomes more common.

Broader Implications for Privacy and Safety

The investigation underscores a shift away from the “move fast and break things” mindset that has dominated much of the tech sector. Regulators are signaling a loss of patience and are pushing for enforceable safety‑by‑design requirements, greater transparency about model training data, and clearer guardrails to protect individuals from AI‑generated manipulation.


Source: TechRadar
