arxiv:2604.23789

MuSS: A Large-Scale Dataset and Cinematic Narrative Benchmark for Multi-Shot Subject-to-Video Generation

Published on May 9 · Submitted by Zhanghaojie on May 12

Abstract

MuSS is a large-scale dual-track dataset designed for multi-shot video generation that addresses narrative logic, spatiotemporal alignment, and copy-paste issues in subject-to-video generation through a progressive captioning pipeline and cross-shot matching mechanism.

AI-generated summary

While video foundation models excel at single-shot generation, real-world cinematic storytelling inherently relies on complex multi-shot sequencing. Further progress is constrained by the absence of datasets that address three core challenges: authentic narrative logic, spatiotemporal text-video alignment conflicts, and the "copy-paste" dilemma prevalent in Subject-to-Video (S2V) generation. To bridge this gap, we introduce MuSS, a large-scale, dual-track dataset tailored for multi-shot video and S2V generation. Sourced from over 3,000 movies, MuSS explicitly supports both complex montage transitions and subject-centric narratives. To construct this dataset, we pioneer a progressive captioning pipeline that eliminates contextual conflicts by ensuring local shot-level accuracy before enforcing global narrative coherence. Crucially, we implement a cross-shot matching mechanism to fundamentally eradicate the S2V copy-paste shortcut. Alongside the dataset, we propose the Cinematic Narrative Benchmark, featuring a visual-logic-driven paradigm and a novel Anti-Copy-Paste Variance (ACP-Var) metric to rigorously assess continuous storytelling and 3D structural consistency. Extensive experiments demonstrate that while current baselines struggle with continuous narrative logic or degenerate into trivial 2D sticker generators, our MuSS-augmented model achieves state-of-the-art narrative effectiveness and cross-shot identity preservation.
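The abstract does not detail how the cross-shot matching mechanism is built, but the core idea it describes can be sketched: pair each subject crop in a target shot with a same-identity crop taken from a *different* shot, so that a model can only succeed by preserving identity, not by copying pixels from its own conditioning frame. The function names, data layout, and similarity threshold below are illustrative assumptions, not the paper's implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cross_shot_reference(subjects, target_shot, sim_threshold=0.7):
    """Toy cross-shot matcher (hypothetical data layout, not the paper's code).

    For each subject crop appearing in `target_shot`, pick the best-matching
    crop of the same identity from any *other* shot. Training on such pairs
    denies the model the copy-paste shortcut of reusing the target's own pixels.

    `subjects` is a list of dicts: {"shot": int, "embed": list[float], "crop": str}.
    Returns a list of (target_crop, reference_crop) training pairs.
    """
    targets = [s for s in subjects if s["shot"] == target_shot]
    others = [s for s in subjects if s["shot"] != target_shot]
    pairs = []
    for t in targets:
        # Most similar crop drawn strictly from a different shot.
        best = max(others, key=lambda o: cosine(t["embed"], o["embed"]), default=None)
        if best is not None and cosine(t["embed"], best["embed"]) >= sim_threshold:
            pairs.append((t["crop"], best["crop"]))
    return pairs

# Example: the shot-0 hero is matched to the shot-1 hero, never to itself.
subjects = [
    {"shot": 0, "embed": [1.0, 0.0], "crop": "s0_hero.png"},
    {"shot": 1, "embed": [0.9, 0.1], "crop": "s1_hero.png"},
    {"shot": 1, "embed": [0.0, 1.0], "crop": "s1_extra.png"},
]
pairs = cross_shot_reference(subjects, target_shot=0)
```

Here the threshold simply gates out low-similarity matches (e.g. the unrelated `s1_extra.png`); the actual dataset pipeline would rely on its own identity-matching criteria.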

Community

TL;DR: Current video generation models often struggle with continuous narrative logic or degenerate into trivial "2D sticker" copy-paste generators. This paper introduces MuSS, a large-scale dataset and benchmark designed specifically for multi-shot Subject-to-Video (S2V) generation, solving contextual conflicts and ensuring true 3D structural consistency.

🌟 Key Highlights:

  • Massive Cinematic Dataset: Sourced from over 3,000 movies, explicitly built to support both complex montage transitions and subject-centric continuous narratives.

  • Progressive Captioning Pipeline: Pioneers a unique “single-shot first, multi-shot second” annotation approach that effectively eliminates spatiotemporal contextual conflicts.

  • Eradicating the "Copy-Paste" Shortcut: Implements a cross-shot matching mechanism to force models to learn real identity preservation rather than taking the trivial 2D shortcut.

  • New Evaluation Paradigm: Introduces the Cinematic Narrative Benchmark with a novel Anti-Copy-Paste Variance (ACP-Var) metric to rigorously assess true continuous storytelling capabilities.

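The page does not spell out the ACP-Var formula. One plausible reading, sketched below purely as an illustration, is the variance of a subject's per-shot appearance features: near-zero variance would flag a "2D sticker" pasted identically into every shot, while moderate variance suggests genuine viewpoint and pose changes. The function and its interpretation are assumptions, not the paper's definition.

```python
def acp_var(features):
    """Toy Anti-Copy-Paste Variance (hypothetical reading of the metric).

    `features` is a list of equal-length feature vectors, one per generated
    shot, for the same subject. Returns the mean per-dimension variance.
    Near zero => the subject is pixel-identical across shots (copy-paste);
    larger values => the subject's appearance actually varies between shots.
    """
    n = len(features)
    dims = len(features[0])
    total = 0.0
    for d in range(dims):
        col = [f[d] for f in features]
        mean = sum(col) / n
        # Population variance of this feature dimension across shots.
        total += sum((x - mean) ** 2 for x in col) / n
    return total / dims

# A pasted sticker yields zero variance; a varied subject does not.
sticker = acp_var([[1.0, 2.0], [1.0, 2.0], [1.0, 2.0]])
varied = acp_var([[1.0, 0.0], [0.0, 1.0]])
```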