Dataset Summary

MoSu (Most Replayed Multimodal Video Summarization) is the first large-scale multimodal video summarization dataset. It provides synchronized visual, audio, and text features for 52,678 in-the-wild videos. The ground-truth annotations are based on YouTube's "Most Replayed" statistics, offering highly reliable per-frame importance scores derived from collective viewer engagement.

Dataset Structure

The dataset consists of 6 core files providing metadata, multimodal features, ground truth annotations, and evaluation splits.

1. Metadata (mosu_metadata.csv)

Contains the foundational information for all 52,678 videos.

  • video_id: The unique identifier for the video. This serves as the universal key to access data in all .h5 files and the split JSON.
  • youtube_id: The original YouTube video ID. The video can be accessed via https://www.youtube.com/watch?v={youtube_id}.
  • duration: The length of the video in seconds.
  • views: The total view count of the video.
  • labels: Original multi-label annotations provided by the YouTube-8M dataset.
  • cluster_id: One of 10 semantic clusters (0-9). These clusters were generated based on metadata to group videos by topic (e.g., Video Games, Sports) and ensure a balanced distribution across dataset splits. For more details, please refer to the original paper.
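The metadata file can be read with standard CSV tooling. Below is a minimal sketch (using only the Python standard library) that loads the rows and reconstructs each video's YouTube URL; the column names follow the list above, and the default file path is an assumption.

```python
import csv

def load_metadata(path="mosu_metadata.csv"):
    """Read the metadata CSV into a list of dicts keyed by column name."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def youtube_url(row):
    """Reconstruct the original YouTube URL from a metadata row."""
    return f"https://www.youtube.com/watch?v={row['youtube_id']}"
```

For example, `youtube_url(rows[0])` yields a string of the form `https://www.youtube.com/watch?v=...` that can be opened in a browser.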

2. Multimodal Features (.h5 files)

Pre-extracted features for all three modalities. Each file is provided in HDF5 format and is approximately 40GB in size. All features have shape (N, D), where N is the video's duration in seconds (matching the duration field in the metadata) and D = 768 for all three modalities.

  • mosu_feat_visual_clip.h5: Visual features extracted using CLIP.
  • mosu_feat_audio_ast.h5: Audio features extracted using Audio Spectrogram Transformer (AST).
  • mosu_feat_text_roberta.h5: Text features extracted using RoBERTa.

File Size & Downloading: The feature files are extremely large (~40GB each). Depending on network conditions, downloading may take a considerable amount of time.
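A feature file can be read per video with h5py. The sketch below assumes the layout described above: each video_id maps to one (N, 768) dataset inside the file. The function name and the assumption that features are keyed directly by video_id are illustrative, not taken from an official loader.

```python
import h5py

def load_features(h5_path, video_id):
    """Return the (N, 768) feature matrix for one video from a feature file.

    Assumes each video_id maps to a single 2-D dataset, as described in the
    dataset card; adjust the key layout if the file is structured differently.
    """
    with h5py.File(h5_path, "r") as f:
        return f[video_id][:]
```

Opening the file per call keeps the example simple; for training loops, keeping one `h5py.File` handle open and indexing into it repeatedly avoids reopening a ~40GB file on every access.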

3. Ground Truth (mosu_gt.h5)

An HDF5 file containing the summarization labels for all 52,678 videos. Each video_id (e.g., '005O') maps to an HDF5 Group containing four keys:

  • change_points: Temporal boundaries for video shots.
  • cluster_id: The semantic cluster ID of the video.
  • gt_score: Frame-level ground-truth importance scores.
  • gt_summary: Binary labels indicating whether a frame is included in the final summary.
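The four entries above can be pulled out of the ground-truth file in one pass. This is a hedged sketch assuming the group/key layout described above (per-video Groups, each holding the four datasets); the function name is our own.

```python
import h5py

GT_KEYS = ("change_points", "cluster_id", "gt_score", "gt_summary")

def load_gt(h5_path, video_id):
    """Return a dict with the four ground-truth entries for one video.

    Uses grp[key][()], which reads both scalar datasets (cluster_id) and
    array datasets (gt_score, gt_summary, change_points) into memory.
    """
    with h5py.File(h5_path, "r") as f:
        grp = f[video_id]
        return {key: grp[key][()] for key in GT_KEYS}
```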

4. Dataset Splits (mosu_split.json)

Contains standardized splits for training, validation, and testing. The splits preserve the proportional representation of each cluster_id to ensure balanced evaluation.

  • train_keys: List of video IDs for 42,152 training videos.
  • val_keys: List of video IDs for 5,263 validation videos.
  • test_keys: List of video IDs for 5,263 testing videos.
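The split file is plain JSON and can be loaded with the standard library. A minimal sketch, assuming the three key names listed above; the default path is an assumption.

```python
import json

def load_splits(path="mosu_split.json"):
    """Return (train_keys, val_keys, test_keys) lists of video IDs."""
    with open(path, "r", encoding="utf-8") as f:
        splits = json.load(f)
    return splits["train_keys"], splits["val_keys"], splits["test_keys"]
```

As a sanity check after loading, the three lists should be pairwise disjoint and their lengths should sum to 52,678 (42,152 + 5,263 + 5,263).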

Citation

If you use the MoSu dataset or the TripleSumm model in your research, please cite the following paper:

@inproceedings{triplesumm2026,
  title={TripleSumm: Adaptive Triple-Modality Fusion for Video Summarization},
  author={Kim, Sumin and Jeong, Hyemin and Kang, Mingu and Kim, Yejin and Oh, Yoori and Lee, Joonseok},
  booktitle={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2026}
}