Baltimore Sues xAI Over Grok Deepfake Harms

Background

Elon Musk’s artificial‑intelligence company xAI launched the chatbot Grok, promoting it as an all‑purpose AI assistant. Alongside the chatbot, Grok offers an image‑generation feature that has drawn intense scrutiny. According to the Center for Countering Digital Hate, the tool was used to produce an estimated three million sexualized images over a period of eleven days. Among those images, roughly twenty‑three thousand depicted minors, raising concerns about the creation of child sexual abuse material.

Legal Action

In response, the city of Baltimore filed a municipal lawsuit against xAI, asserting that the company violated the city’s Consumer Protection Ordinance. The complaint, reported by The Guardian, claims that xAI marketed Grok without disclosing the potential risks and harms associated with its use, particularly the generation of non‑consensual and illegal images. City Solicitor Ebony M. Thompson emphasized that Baltimore’s consumer‑protection laws exist to safeguard residents from emerging technological harms. She stated that when companies introduce powerful technologies without adequate guardrails, the city has both the authority and the obligation to act.

Broader Context

The lawsuit adds to a growing wave of regulatory and legal challenges targeting AI image‑generation tools. Regulators worldwide have restricted access to, or opened investigations into, platforms that enable potentially illegal and non‑consensual image creation. In the United States, a separate potential class action was filed by three teenagers who alleged that photos of them were used to create child sexual abuse material. While the federal government has not yet taken direct action against xAI, Baltimore’s municipal suit represents a novel approach: leveraging local consumer‑protection statutes to address AI‑related risks.

Implications

City officials argue that the legal action is intended to protect residents, hold technology firms accountable, and prevent harms from becoming entrenched as AI technology continues to evolve. The case may set a precedent for how municipalities can use consumer‑protection laws to regulate emerging AI applications, especially those capable of generating harmful visual content.

Source: Engadget