OpenAI’s Sora App Raises Concerns Over Deepfake Proliferation

CNET

AI‑Generated Videos Go Mainstream with Sora

OpenAI has launched Sora, an iOS application that turns any user into a creator of AI‑generated videos. The app presents a TikTok‑like feed where every clip is produced by the model, making it a “social media app” built entirely on synthetic media.

Built‑In Watermark and Metadata

Every video exported from Sora carries a distinctive white cloud logo that bounces around the edges of the clip. This moving watermark mirrors the approach used by platforms such as TikTok. In addition, the videos embed content credentials from the Coalition for Content Provenance and Authenticity (C2PA). The metadata records that the file was “issued by OpenAI” and flags it as AI‑generated.

Verification Tools

The Content Authenticity Initiative offers a free verification tool that reads the embedded metadata. Users can upload a video to the service and see a panel confirming its origin, creation date, and AI status. While the tool reliably flags unaltered Sora videos, it may miss content that has been re‑encoded or stripped of its watermark.
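For readers comfortable with a command line, a quick first-pass check is also possible before uploading anything: C2PA content credentials are embedded as JUMBF boxes whose labels (such as "c2pa") appear as plain ASCII in the file's bytes. The sketch below is a rough heuristic under that assumption, not a substitute for the CAI's verification tool; it cannot validate signatures, and a negative result proves nothing, since re-encoding strips the metadata.

```python
def may_contain_c2pa_manifest(path: str, chunk_size: int = 1 << 20) -> bool:
    """Heuristic: return True if the raw bytes contain the b"c2pa" label
    used by C2PA/JUMBF manifest stores. A hit only suggests credentials
    are present; use a real verifier to confirm and validate them."""
    marker = b"c2pa"
    tail = b""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            # Prepend the previous chunk's tail so a marker split
            # across a chunk boundary is still found.
            if marker in tail + chunk:
                return True
            tail = chunk[-(len(marker) - 1):]
    return False
```

A video exported straight from Sora should trip this check; the same clip screen-recorded or re-encoded typically will not, which is exactly the gap the article describes.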

Platform Labels and Creator Disclosure

Meta’s family of apps, as well as TikTok and YouTube, have begun labeling posts that appear to be AI‑generated. These internal systems are not perfect, and the most reliable indication remains a clear disclosure from the creator. Some platforms now allow users to add an “AI‑generated” tag in captions, helping audiences understand the source.

Industry Concerns

Experts highlight the risk that Sora’s low‑skill, low‑cost workflow could accelerate the spread of deepfakes and misinformation. Public figures and celebrities are especially vulnerable, prompting unions such as SAG‑AFTRA to urge OpenAI to strengthen safeguards. OpenAI’s CEO Sam Altman has acknowledged that society will need to adapt to a world where anyone can create realistic fake videos.

What Users Can Do

Consumers are advised to stay vigilant: look for the moving watermark, check metadata with verification tools, and be skeptical of content that feels “off.” While no single method guarantees detection, combining visual cues, metadata checks, and platform labels provides the best chance of spotting synthetic media.


Source: CNET
