AI Image Generator Startup’s Database Exposes Millions of Non‑Consensual Nude Images
Wired AI

Background

Security researcher Jeremiah Fowler discovered that a cloud storage bucket used by an AI image‑generation startup was misconfigured, allowing anyone on the internet to access its contents. The bucket housed assets for the company’s consumer‑facing services MagicEdit and DreamPal, which are operated under the DreamX brand. Fowler’s investigation, first reported on the ExpressVPN blog, revealed the extent of the exposure and prompted immediate outreach to the company.

Scope of the Exposure

The publicly accessible bucket contained 1,099,985 records, the overwhelming majority of which were pornographic images or videos. The collection included "nudified" depictions of real individuals, face-swapped content, and AI-generated representations of minors. Fowler noted that at the time of discovery roughly 10,000 new images were being added each day, indicating an active content-generation pipeline. The data set spanned a range of styles, from anime-style graphics to hyper-realistic depictions that appeared to be based on real people.

Company Responses

After being contacted, DreamX confirmed that it had closed public access to the bucket and launched an internal investigation with external legal counsel. The firm also suspended access to its products pending the investigation's outcome. In its statements, DreamX emphasized a commitment to user safety, legal compliance, and transparency, and pointed to existing moderation safeguards such as OpenAI's Moderation API and automatic prompt filtering.

Related entities were also addressed. A spokesperson for SocialBook, an influencer‑marketing firm linked to the bucket, clarified that SocialBook does not operate or manage the exposed storage and is a separate legal entity. BoostInsider, the developer listed for the MagicEdit and DreamPal apps on the Apple App Store, was described as a defunct entity whose apps were removed as part of a broader restructuring and to strengthen content‑moderation frameworks.

Both MagicEdit and DreamPal were taken down from major app stores. Google confirmed that the apps were removed for policy violations related to sexually explicit content, while Apple indicated the apps had been withdrawn from its store.

Industry Implications

The breach underscores persistent risks associated with AI-driven image generation, particularly the creation and distribution of non-consensual sexual imagery and child sexual abuse material (CSAM). Adam Dodge, founder of the advocacy group EndTAB, warned that the incident reflects a broader pattern of startups prioritizing rapid growth over robust trust-and-safety measures. Fowler's findings join earlier reports of misconfigured AI-image databases that contained similarly abusive content.

Law enforcement and child‑protection organizations, including the National Center for Missing & Exploited Children, were notified, though they do not disclose details of specific tips received. The episode has prompted renewed calls for stricter oversight, mandatory content moderation, and clearer accountability mechanisms for companies deploying generative AI technologies.

Source: Wired AI
