# Seeing the Scene Matters: A Scene-Aware Long-Video Benchmark
## 👀 Overview

Long-video understanding remains challenging for multimodal large language models because real videos are not merely long sequences of frames; they are organized into semantically coherent scenes. Existing video benchmarks often emphasize short-clip perception or sparse frame matching, which makes it difficult to evaluate whether a model can understand scene-level events, connect multimodal cues across multi-minute videos, and reason over temporally distributed evidence.

We introduce **SceneBench**, a scene-aware benchmark designed to evaluate long-video understanding at the scene level. SceneBench focuses on multi-minute videos and diverse task formats, including **Title Prediction**, **Comment Prediction**, **ClipQA**, **SceneQA**, **SceneQA-Audio**, and **I-VQA**. After ambiguity filtering and quality control, the benchmark contains **8,507** final QA pairs.

## 🌟 Highlights

* **Scene-aware long-video benchmark**: SceneBench targets long, multi-minute videos and emphasizes scene-level understanding rather than isolated frame perception.
* **Manually annotated and cleaned data**: All tasks are manually annotated, and ambiguous samples are removed during quality control, yielding **8,507** final QA pairs used in the reported experiments.
* **Multimodal scene-level reasoning**: SceneBench evaluates whether models can integrate visual, textual, and audio-related cues across coherent scene units.
* **Practical long-video evaluation**: The benchmark reflects realistic long-video settings, where simply increasing the number of input frames can introduce irrelevant visual noise or exceed model memory limits.
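
The sketch below shows one way the benchmark's QA pairs could be consumed for per-task evaluation. The file name `scenebench_qa.json`, the field names (`task`, `answer`), and the user-supplied `predict` callable are illustrative assumptions, not the benchmark's actual interface.

```python
import json
from collections import defaultdict


def evaluate(qa_path: str, predict) -> dict:
    """Compute per-task accuracy over multiple-choice QA pairs.

    `predict(item)` is a user-supplied callable that returns the chosen
    option (e.g. "A") for a single QA item.
    """
    with open(qa_path, "r", encoding="utf-8") as f:
        qa_pairs = json.load(f)  # assumed: a list of QA dicts

    correct = defaultdict(int)
    total = defaultdict(int)
    for item in qa_pairs:
        task = item["task"]  # e.g. "SceneQA", "ClipQA" (assumed field name)
        total[task] += 1
        if predict(item) == item["answer"]:  # assumed ground-truth field
            correct[task] += 1

    return {task: correct[task] / total[task] for task in total}


# Example usage with a trivial baseline that always answers "A":
# accuracies = evaluate("scenebench_qa.json", lambda item: "A")
# print(accuracies)
```

Reporting accuracy per task (rather than a single pooled score) keeps scene-level tasks such as SceneQA distinguishable from clip-level ones such as ClipQA.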