Elon Musk’s Grok Still Generates Undressing Images Despite New Restrictions
Background
Since early 2024, Musk’s AI offerings—including the xAI‑built chatbot Grok—have faced scrutiny for enabling the creation of non‑consensual intimate imagery, explicit videos, and sexualized depictions of minors. Critics from the United States, Europe, Asia, and Australia have launched investigations or issued condemnations, citing concerns over privacy, consent, and child safety.
New Restrictions on X
In response, X announced a set of technological measures aimed at preventing Grok from editing images of real individuals in bikinis, underwear, or other revealing attire. The policy applies to all users on the X platform, regardless of whether they have free or paid subscriptions. Additionally, X introduced a geoblock that restricts image generation of real people in such clothing in jurisdictions where it is illegal. The company also reiterated its commitment to removing high‑priority violative content, including child sexual‑abuse material and non‑consensual nudity.
Findings from Researchers
Despite the announced safeguards, multiple independent investigations have documented that Grok’s standalone services remain capable of producing the prohibited content. Paul Bouchaud, lead researcher at the nonprofit AI Forensics, confirmed that the Grok website and mobile app can still generate photorealistic nudity of real people, a capability that appears to be blocked only on the X integration. Tests conducted by WIRED, The Verge, and Bellingcat in the United Kingdom and the United States successfully removed clothing from images of both men and women without apparent restrictions.
Researchers noted that the Grok app in the UK prompted users for a year of birth before generating an image of a male subject, but the request could still be fulfilled. Other users reported mixed results on a pornography forum, with some prompts yielding explicit images after several attempts, while others encountered stricter moderation that prevented any removal of clothing.
Continued Issues on Grok Platform
The persistence of these capabilities underscores a gap between the moderation applied on X and the broader ecosystem of Grok services. While verified accounts on X appear to have lost the ability to generate bikini images of women, the same functionality remains accessible on the Grok website and app. This discrepancy has led to criticism from women’s advocacy groups, who describe the situation as a “monetization of abuse.”
Responses from Musk and xAI
Musk has publicly stated that Grok, when NSFW mode is enabled, is intended to allow upper‑body nudity of imaginary adult humans, aligning with standards seen in R‑rated movies on platforms like Apple TV. He emphasized that content allowances may vary by country based on local laws. Spokespeople for xAI have not provided comment on the recent tests, while an X spokesperson confirmed that the geoblock applies to both the app and website.
Implications and Outlook
The ongoing ability of Grok’s non‑X services to generate non‑consensual and sexualized imagery raises important questions about the effectiveness of platform‑specific moderation versus ecosystem‑wide safeguards. Regulators and advocacy groups continue to monitor the situation, urging more comprehensive controls to prevent abuse. As AI image generators become increasingly sophisticated, the challenge of balancing creative use with ethical safeguards remains a focal point for technology companies, policymakers, and civil society.
Used: News Factory APP - news discovery and automation - ChatGPT for Business