Teen Girls File Class-Action Suit Against xAI Over Grok-Generated Child Sexual Abuse Images

Background

Three teenage girls, identified as Jane Doe 1, Jane Doe 2 and Jane Doe 3, and their legal guardians have filed a class‑action lawsuit against xAI, the artificial‑intelligence company founded by Elon Musk. The suit alleges that xAI’s Grok AI tool was used to create and share child sexual abuse material (CSAM) that featured the plaintiffs’ faces and likenesses.

Allegations

The complaint states that from December to early January, Grok allowed users on the X platform to generate AI‑produced nonconsensual intimate images, often referred to as deepfake porn. Reports cited in the filing estimate that users produced 4.4 million “undressed” or “nudified” images, representing 41% of all images created during a nine‑day window. The plaintiffs claim the images were disseminated through Discord servers and other online venues, causing severe loss of privacy, dignity, and personal safety.

The lawsuit contends that xAI did not employ industry‑standard guardrails to prevent the creation of such content. It further alleges that xAI licensed its technology to third‑party companies abroad, which sold subscriptions that enabled abusers to generate CSAM using the company’s servers, thereby making xAI liable for the resulting harm.

Impact on Victims

According to the filing, Jane Doe 1 first learned that AI‑generated abusive material featuring her was circulating after receiving an anonymous Instagram message in early December. The message directed her to a Discord server where the content was shared. Law‑enforcement investigation subsequently led to the arrest of one perpetrator. Ongoing investigations have revealed that the other two plaintiffs’ images were also transformed into abusive material using Grok.

Reactions and Wider Context

The wave of AI‑generated “undressed” images sparked outrage worldwide. The European Commission launched an investigation, while Malaysia and Indonesia imposed bans on the X platform. Some U.S. officials called on Apple and Google to remove the X app from their stores, citing policy violations, though no federal investigation into X or xAI has been opened.

A separate class‑action lawsuit was filed by a South Carolina woman in late January, indicating a growing pattern of legal challenges against AI‑generated CSAM. The current complaint compares Grok’s “spicy AI” generation to “dark arts,” emphasizing the ease with which children can be depicted in any pose, no matter how illicit.

Legal and Ethical Implications

The lawsuit underscores the need for robust safeguards in AI image‑generation tools and raises questions about the responsibility of AI developers for content produced through their platforms. Plaintiffs seek accountability for the alleged privacy violations and the lasting impact of having their identities attached to AI‑generated child sexual abuse imagery.

Source: CNET
