OpenAI’s video‑generation model Sora 2 has been misused to produce realistic synthetic videos depicting children in questionable scenarios. Many of these clips mimic commercial advertisements and have spread on TikTok and other platforms, raising concerns about how easily existing safeguards can be circumvented. While OpenAI asserts strict policies against child exploitation, the rapid emergence of such content exposes gaps in moderation. Industry observers, child‑protection groups, and policymakers are calling for stronger design‑by‑default protections to prevent misuse of AI‑generated media.