YouTube Rolls Out AI Likeness Detection Tool for Creators

The Verge

New AI Likeness Detection Feature

YouTube announced that creators who are part of its Partner Program now have access to an early‑stage AI detection system designed to identify videos that feature their face or likeness without authorization. The feature is intended to help high‑profile individuals manage the growing amount of synthetic media that appears on the platform.

How the Tool Works

Eligible creators must first verify their identity through a process outlined by YouTube. Once verified, they can navigate to a dedicated Content Detection tab inside YouTube Studio where the system surfaces videos that potentially contain unauthorized AI‑generated content. Creators can review each flagged video and, if they determine it is not an authorized use of their likeness, they can submit a request for the video to be removed.

The system operates in a manner akin to YouTube’s existing Content ID technology, which matches copyrighted audio and video against a database of reference files. However, this new tool focuses on visual likeness rather than copyrighted material, and YouTube cautions that, because the system is still in development, it may sometimes surface videos that feature the creator’s actual face rather than a synthetic version.

Pilot Program and Expansion

The detection tool was first tested in a pilot program that began in December, involving talent represented by Creative Artists Agency. YouTube’s blog at the time described the collaboration as giving several of the world’s most influential figures early access to technology that can identify and manage AI‑generated content featuring their likeness at scale. Following the pilot, the first wave of eligible creators received email notifications about the new feature, and YouTube plans to roll it out to additional creators over the next few months.

Broader AI Policy Measures

This rollout is part of a larger set of initiatives aimed at addressing AI‑generated media on the platform. In March, YouTube introduced a requirement for creators to label uploads that contain AI‑generated or AI‑altered content. At the same time, the company announced a strict policy governing AI‑generated music that mimics an artist’s unique singing or rapping voice. Together, these policies and tools reflect YouTube’s effort to give creators more control over how their likeness and creative output are used in the age of synthetic media.

Implications for Creators and the Platform

By providing a systematic way to detect unauthorized deepfake or AI‑generated videos, YouTube aims to reduce the risk of misinformation, impersonation, and potential reputational harm for high‑profile creators. The tool also signals to the broader creator community that the platform is taking proactive steps to address the challenges posed by rapidly advancing AI video generation technologies.

Source: The Verge
