ByteDance Unveils Seedance 2.0, Multimodal AI Video Generator
ByteDance Introduces Seedance 2.0
ByteDance, the company behind TikTok, has released a new AI model named Seedance 2.0. In a blog post, the firm described the model as a substantial leap in generation quality, capable of handling prompts that blend text, images, video, and audio. A single request can combine up to nine images, three video clips, and three audio clips, allowing the system to synthesize complex scenes with multiple subjects.
Video Creation Capabilities
Seedance 2.0 can generate video clips up to 15 seconds long, complete with audio. The model accounts for camera movement, visual effects, and motion, and can follow text‑based storyboards. In a showcase, the AI reproduced a figure‑skating routine that featured synchronized takeoffs, mid‑air spins, and precise ice landings while adhering to real‑world physics.
Public Demonstrations
Social‑media users have already posted examples. One video combined the likenesses of Brad Pitt and Tom Cruise in a cinematic fight, drawing a reaction from screenwriter Rhett Reese. Other clips showed anime‑style animation, cartoon sequences, sci‑fi scenes, and footage that looks as though it was made by a human creator. Some demonstrations included characters from popular franchises, highlighting the model's ability to reproduce recognizable styles.
Availability and Outlook
For now, Seedance 2.0 is accessible through ByteDance’s Dreamina AI platform and its AI assistant Doubao. It is unclear whether the technology will be integrated into TikTok, especially given recent changes in the app’s U.S. ownership. The rollout marks another step in the rapid advancement of AI‑driven video generation, joining efforts from Google, OpenAI, Runway, and other industry players.