---
base_model:
- Wan-AI/Wan2.1-I2V-14B-720P-Diffusers
library_name: diffusers
license: apache-2.0
pipeline_tag: image-to-video
datasets:
- Video-Reason/VBVR-Dataset
---
# VBVR: A Very Big Video Reasoning Suite

<a href="https://video-reason.com" target="_blank">
<img alt="Project Page" src="https://img.shields.io/badge/Project%20-%20Homepage-4285F4" height="20" />
</a>
<a href="https://github.com/Video-Reason/VBVR-EvalKit" target="_blank">
<img alt="Code" src="https://img.shields.io/badge/Evaluation_code-VBVR_Bench-100000?style=flat-square&logo=github&logoColor=white" height="20" />
</a>
<a href="https://github.com/Video-Reason/VBVR-Wan2.2" target="_blank">
<img alt="Code" src="https://img.shields.io/badge/Training_code-VBVR_Wan2.2-100000?style=flat-square&logo=github&logoColor=white" height="20" />
</a>
<a href="https://github.com/Video-Reason/VBVR-DataFactory" target="_blank">
<img alt="Code" src="https://img.shields.io/badge/Data_code-VBVR_DataFactory-100000?style=flat-square&logo=github&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/papers/2602.20159" target="_blank">
<img alt="arXiv" src="https://img.shields.io/badge/arXiv-VBVR-red?logo=arxiv" height="20" />
</a>
<a href="https://huggingface.co/datasets/Video-Reason/VBVR-Dataset" target="_blank">
<img alt="Dataset" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Dataset-Data-ffc107?color=ffc107&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/datasets/Video-Reason/VBVR-Bench-Data" target="_blank">
<img alt="Bench Data" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Data-ffc107?color=ffc107&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard" target="_blank">
<img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Leaderboard-ffc107?color=ffc107&logoColor=white" height="20" />
</a>

## Overview
Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture,
enabling intuitive reasoning over motion, interaction, and causality. Rapid progress in video models, however, has focused primarily on visual quality,
and the systematic study of video reasoning and its scaling behavior has been held back by a lack of video reasoning training data.

To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks
and over one million video clips, approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench,
a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers,
enabling reproducible and interpretable diagnosis of video reasoning capabilities.

Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization
to unseen reasoning tasks. **Together, VBVR lays a foundation for the next stage of research in generalizable video reasoning.**

The model was presented in the paper [A Very Big Video Reasoning Suite](https://huggingface.co/papers/2602.20159).

## Model Zoo

| Model | Base Architecture | Other Remarks |
|-------|-------------------|---------------|
| [VBVR-Wan2.1](https://huggingface.co/Video-Reason/VBVR-Wan2.1) | Wan2.1-I2V-14B-720P | Diffusers format |
| [VBVR-Wan2.2](https://huggingface.co/Video-Reason/VBVR-Wan2.2) | Wan2.2-I2V-A14B | Diffusers format |
| [**VBVR-Wan2.1-diffsynth**](https://huggingface.co/Video-Reason/VBVR-Wan2.1-diffsynth) | Wan2.1-I2V-14B-720P | DiffSynth LoRA format |
| [VBVR-Wan2.2-diffsynth](https://huggingface.co/Video-Reason/VBVR-Wan2.2-diffsynth) | Wan2.2-I2V-A14B | DiffSynth LoRA format |
| [VBVR-LTX2.3-diffsynth](https://huggingface.co/Video-Reason/VBVR-LTX2.3-diffsynth) | LTX-Video-2.3 | DiffSynth LoRA format |

## Release Information
VBVR-Wan2.1 is trained from Wan2.1-I2V-14B-720P without architectural modifications, as the goal of VBVR is to *investigate data scaling behavior* and provide *strong baseline models* for the video reasoning research community. Leveraging the VBVR-Dataset, one of the largest video reasoning datasets to date, the VBVR model family achieves the highest scores on VBVR-Bench.

In this release, we present
[**VBVR-Wan2.1**](https://huggingface.co/Video-Reason/VBVR-Wan2.1) (Diffusers format),
[**VBVR-Wan2.1-diffsynth**](https://huggingface.co/Video-Reason/VBVR-Wan2.1-diffsynth) (DiffSynth LoRA format), and
[**VBVR-LTX2.3-diffsynth**](https://huggingface.co/Video-Reason/VBVR-LTX2.3-diffsynth) (DiffSynth LoRA format; Diffusers does not yet support LTX-Video-2.3, so only the DiffSynth LoRA format is released for this model).

<table>
<tr>
<th>Model</th>
<th>Overall</th>
<th>ID</th>
<th>ID-Abst.</th>
<th>ID-Know.</th>
<th>ID-Perc.</th>
<th>ID-Spat.</th>
<th>ID-Trans.</th>
<th>OOD</th>
<th>OOD-Abst.</th>
<th>OOD-Know.</th>
<th>OOD-Perc.</th>
<th>OOD-Spat.</th>
<th>OOD-Trans.</th>
</tr>
<tbody>
<tr>
<td><strong>Human</strong></td>
<td>0.974</td><td>0.960</td><td>0.919</td><td>0.956</td><td>1.000</td><td>0.950</td><td>1.000</td>
<td>0.988</td><td>1.000</td><td>1.000</td><td>0.990</td><td>1.000</td><td>0.970</td>
</tr>
<tr style="background:#F2F0EF;font-weight:700;text-align:center;">
<td colspan="14"><em>Open-source Models</em></td>
</tr>
<tr>
<td>CogVideoX1.5-5B-I2V</td>
<td>0.273</td><td>0.283</td><td>0.241</td><td>0.328</td><td>0.257</td><td>0.328</td><td>0.305</td>
<td>0.262</td><td><u>0.281</u></td><td>0.235</td><td>0.250</td><td><strong>0.254</strong></td><td>0.282</td>
</tr>
<tr>
<td>HunyuanVideo-I2V</td>
<td>0.273</td><td>0.280</td><td>0.207</td><td>0.357</td><td>0.293</td><td>0.280</td><td><u>0.316</u></td>
<td>0.265</td><td>0.175</td><td><strong>0.369</strong></td><td>0.290</td><td><u>0.253</u></td><td>0.250</td>
</tr>
<tr>
<td><strong>Wan2.2-I2V-A14B</strong></td>
<td><strong>0.371</strong></td><td><strong>0.412</strong></td><td><strong>0.430</strong></td>
<td><strong>0.382</strong></td><td><strong>0.415</strong></td><td><strong>0.404</strong></td>
<td><strong>0.419</strong></td><td><strong>0.329</strong></td>
<td><strong>0.405</strong></td><td>0.308</td><td><strong>0.343</strong></td>
<td>0.236</td><td><u>0.307</u></td>
</tr>
<tr>
<td><u>LTX-2</u></td>
<td><u>0.313</u></td><td><u>0.329</u></td><td><u>0.316</u></td>
<td><u>0.362</u></td><td><u>0.326</u></td><td><u>0.340</u></td>
<td>0.306</td><td><u>0.297</u></td>
<td>0.244</td><td><u>0.337</u></td><td><u>0.317</u></td>
<td>0.231</td><td><strong>0.311</strong></td>
</tr>
<tr style="background:#F2F0EF;font-weight:700;text-align:center;">
<td colspan="14"><em>Proprietary Models</em></td>
</tr>
<tr>
<td><u>Seedance 2.0</u></td>
<td><u>0.544</u></td><td><strong>0.570</strong></td><td>0.593</td><td><u>0.498</u></td><td><strong>0.618</strong></td><td><u>0.514</u></td><td><strong>0.602</strong></td>
<td><u>0.517</u></td><td><strong>0.643</strong></td><td>0.398</td><td><u>0.492</u></td><td>0.427</td><td><strong>0.556</strong></td>
</tr>
<tr>
<td>Runway Gen-4 Turbo</td>
<td>0.403</td><td>0.392</td><td>0.396</td><td>0.409</td><td>0.429</td><td>0.341</td><td>0.363</td>
<td>0.414</td><td>0.515</td><td><u>0.429</u></td><td>0.419</td><td>0.327</td><td>0.373</td>
</tr>
<tr>
<td><strong>Sora 2</strong></td>
<td><strong>0.546</strong></td><td><u>0.569</u></td><td><u>0.602</u></td>
<td>0.477</td><td><u>0.581</u></td><td><strong>0.572</strong></td>
<td><u>0.597</u></td><td><strong>0.523</strong></td>
<td><u>0.546</u></td><td><strong>0.472</strong></td><td><strong>0.525</strong></td>
<td><strong>0.462</strong></td><td><u>0.546</u></td>
</tr>
<tr>
<td>Kling 2.6</td>
<td>0.369</td><td>0.408</td><td>0.465</td><td>0.323</td><td>0.375</td><td>0.347</td><td>0.519</td>
<td>0.330</td><td>0.528</td><td>0.135</td><td>0.272</td><td>0.356</td><td>0.359</td>
</tr>
<tr>
<td>Veo 3.1</td>
<td>0.480</td><td>0.531</td><td><strong>0.611</strong></td>
<td><strong>0.503</strong></td><td>0.520</td><td>0.444</td>
<td>0.510</td><td>0.429</td>
<td><u>0.577</u></td><td>0.277</td><td>0.420</td>
<td><u>0.441</u></td><td>0.404</td>
</tr>
<tr style="background:#F2F0EF;font-weight:700;text-align:center;">
<td colspan="14"><em>Data Scaling Strong Baseline</em></td>
</tr>
<tr>
<td><strong>VBVR-LTX2.3</strong></td>
<td>0.516</td><td>0.580</td><td>0.608</td><td>0.631</td><td>0.529</td><td>0.454</td><td>0.680</td>
<td>0.453</td><td>0.608</td><td>0.577</td><td><u>0.409</u></td><td>0.414</td><td><u>0.388</u></td>
</tr>
<tr>
<td><strong>VBVR-Wan2.1</strong></td>
<td><u>0.592</u></td><td><u>0.724</u></td><td><u>0.705</u></td><td><u>0.710</u></td><td><u>0.727</u></td><td><u>0.719</u></td><td><u>0.784</u></td>
<td><u>0.461</u></td><td><u>0.674</u></td><td><strong>0.592</strong></td><td>0.387</td><td><u>0.461</u></td><td>0.387</td>
</tr>
<tr>
<td><strong>VBVR-Wan2.2</strong></td>
<td><strong>0.685</strong></td><td><strong>0.760</strong></td><td><strong>0.724</strong></td>
<td><strong>0.750</strong></td><td><strong>0.782</strong></td><td><strong>0.745</strong></td>
<td><strong>0.833</strong></td><td><strong>0.610</strong></td>
<td><strong>0.768</strong></td><td><u>0.572</u></td><td><strong>0.547</strong></td>
<td><strong>0.618</strong></td><td><strong>0.615</strong></td>
</tr>
</tbody>
</table>

## QuickStart

### Inference
For running inference, please refer to the [**official guide**](https://github.com/Video-Reason/VBVR-Wan2.2?tab=readme-ov-file#wan21-inference) in the VBVR-Wan2.2 GitHub repository.
That repository contains the latest instructions, configurations, and examples for performing inference with the VBVR model family.
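For a quick local smoke test of the Diffusers-format checkpoint, a minimal sketch along the following lines should work. This is an assumption-laden illustration, not the official recipe: it assumes a recent `diffusers` release with Wan support, a CUDA GPU with enough memory, and that the repository loads via the standard `WanImageToVideoPipeline`; the prompt, frame count, and file paths are hypothetical.

```python
# Minimal image-to-video inference sketch (NOT the official VBVR settings).
# Assumes: diffusers >= 0.33 with Wan pipelines, a CUDA GPU, and that
# Video-Reason/VBVR-Wan2.1 loads with the standard WanImageToVideoPipeline.
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "Video-Reason/VBVR-Wan2.1", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trades speed for lower peak VRAM

image = load_image("input_frame.png")  # conditioning frame (hypothetical path)
frames = pipe(
    image=image,
    prompt="Move the red block onto the blue block.",  # illustrative reasoning task
    num_frames=81,        # illustrative; check the official guide for settings
    guidance_scale=5.0,   # illustrative
).frames[0]

export_to_video(frames, "output.mp4", fps=16)
```

For the official sampling configuration (resolution, step count, scheduler), defer to the guide linked above.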

## Citation

```bibtex
@article{vbvr2026,
  title   = {A Very Big Video Reasoning Suite},
  author  = {Wang, Maijunxian and Wang, Ruisi and Lin, Juyi and Ji, Ran and
             Wiedemer, Thadd{\"a}us and Gao, Qingying and Luo, Dezhi and
             Qian, Yaoyao and Huang, Lianyu and Hong, Zelong and Ge, Jiahui and
             Ma, Qianli and He, Hang and Zhou, Yifan and Guo, Lingzi and
             Mei, Lantao and Li, Jiachen and Xing, Hanwen and Zhao, Tianqi and
             Yu, Fengyuan and Xiao, Weihang and Jiao, Yizheng and
             Hou, Jianheng and Zhang, Danyang and Xu, Pengcheng and
             Zhong, Boyang and Zhao, Zehong and Fang, Gaoyun and Kitaoka, John and
             Xu, Yile and Xu, Hua bureau and Blacutt, Kenton and Nguyen, Tin and
             Song, Siyuan and Sun, Haoran and Wen, Shaoyue and He, Linyang and
             Wang, Runming and Wang, Yanzhi and Yang, Mengyue and Ma, Ziqiao and
             Milli{\`e}re, Rapha{\"e}l and Shi, Freda and Vasconcelos, Nuno and
             Khashabi, Daniel and Yuille, Alan and Du, Yilun and Liu, Ziming and
             Lin, Dahua and Liu, Ziwei and Kumar, Vikash and Li, Yijiang and
             Yang, Lei and Cai, Zhongang and Deng, Hokin},
  journal = {arXiv preprint arXiv:2602.20159},
  year    = {2026},
  url     = {https://arxiv.org/abs/2602.20159}
}
```