
OpenAI Tightens Safeguards for AI-Generated Likeness After Actor Concerns

The Verge

Industry Pushback Over AI‑Generated Likeness

Actors, studios, agents and the Screen Actors Guild‑American Federation of Television and Radio Artists (SAG‑AFTRA) have expressed alarm about the use of real‑person likenesses in videos generated by OpenAI’s Sora 2 tool. The controversy intensified after videos featuring actor Bryan Cranston surfaced, including one that depicted him taking a selfie with Michael Jackson.

Joint Statement Announces Strengthened Guardrails

In response, OpenAI, Cranston, SAG‑AFTRA and several talent agencies issued a joint statement. The statement confirmed that OpenAI has "strengthened guardrails" around its opt‑in policy for likeness and voice. OpenAI also "expressed regret for these unintentional generations" and promised to "expeditiously" review complaints about policy breaches.

Commitment to Artist Control

The company reiterated that all artists, performers and individuals will have the right to determine whether and how they can be simulated. OpenAI indicated it would provide more granular control for rightsholders, moving away from its earlier opt‑out approach toward an opt‑in model with additional safeguards.

Support From Talent Agencies

The statement was co‑signed by United Talent Agency, the Association of Talent Agents and Creative Artists Agency. These agencies had previously criticized OpenAI’s lack of protections for artists, and now endorse the newly announced safeguards.

Cranston’s Response and Legislative Calls

Bryan Cranston welcomed the policy changes, saying he is "grateful to OpenAI for its policy and for improving its guardrails." SAG‑AFTRA president Sean Astin added that performers need legal protection against "massive misappropriation by replication technology" and highlighted the proposed Nurture Originals, Foster Art, and Keep Entertainment Safe Act, also known as the NO FAKES Act.

Future Direction for Sora

OpenAI originally launched Sora 2 with an opt‑out policy for copyright holders, but reversed course after public outcry and controversial videos, such as a Nazi‑themed SpongeBob. The company now promises to give rightsholders more granular control, aligning with the opt‑in model for likeness while adding further controls.


Source: The Verge
