---
license: apache-2.0
task_categories:
  - video-text-to-text
  - visual-question-answering
  - image-text-to-text
language:
  - en
tags:
  - llava
  - multimodal
  - video
  - spatial-reasoning
size_categories:
  - 10M<n<100M
configs:
  - config_name: viewer_caption_30s
    data_files:
      - split: preview
        path: viewer/caption_30s.parquet
  - config_name: viewer_caption_60s
    data_files:
      - split: preview
        path: viewer/caption_60s.parquet
  - config_name: viewer_caption_180s
    data_files:
      - split: preview
        path: viewer/caption_180s.parquet
  - config_name: viewer_caption_gt10min
    data_files:
      - split: preview
        path: viewer/caption_gt10min.parquet
  - config_name: viewer_spatial
    data_files:
      - split: preview
        path: viewer/spatial.parquet
---

# LLaVA-OneVision-2-Data

Training data for the LLaVA-OneVision-2 multimodal model family, covering large-scale video and spatial reasoning corpora used in mid-training.

## Dataset Composition

| Subset | Format | Description |
|---|---|---|
| `mid_training_video/60s_rest/` | WebDataset (`.tar`) | 10,809 shards of ~60-second video clips |
| `mid_training_video/caption_v0/split_30s.jsonl` | JSONL | Captions for 30-second video clips |
| `mid_training_video/caption_v0/split_60s.jsonl` | JSONL | Captions for 60-second video clips |
| `mid_training_video/caption_v0/split_180s.jsonl` | JSONL | Captions for 180-second video clips |
| `mid_training_video/caption_v0/split_gt10min.jsonl` | JSONL | Captions for >10-minute video clips |
| `spatial/` | WebDataset (`.tar`) | 84 shards of spatial reasoning data (refcoco, visual genome, pointing, 3D, etc.) |
| `mid_training_video/mapping/mapping_{5s,10s,30s,60s,180s,gt10min}.csv` | CSV | Maps each video clip's `dst_path` to its source `youtube_id` and `[start_time, end_time]` window |
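The mapping CSVs tie each extracted clip back to its source video. A minimal stdlib sketch of building a `dst_path` lookup — the inline sample rows, clip paths, and exact header spelling are assumptions for illustration; only the column roles (`dst_path`, `youtube_id`, `start_time`, `end_time`) come from the table above:

```python
import csv
import io

# Hypothetical excerpt of a mapping_60s.csv file; real files live under
# mid_training_video/mapping/ and may spell the header differently.
sample_csv = """dst_path,youtube_id,start_time,end_time
clips/60s/000001.mp4,aBcDeFgHiJk,0.0,60.0
clips/60s/000002.mp4,aBcDeFgHiJk,60.0,120.0
"""

# Build a lookup from each clip's dst_path to its (youtube_id, start, end) window.
mapping = {
    row["dst_path"]: (row["youtube_id"], float(row["start_time"]), float(row["end_time"]))
    for row in csv.DictReader(io.StringIO(sample_csv))
}

yt_id, start, end = mapping["clips/60s/000002.mp4"]
print(yt_id, start, end)  # aBcDeFgHiJk 60.0 120.0
```

The same pattern applies to every duration bucket: swap in the matching `mapping_{5s,10s,30s,...}.csv` file for the clips you are resolving.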

## Preview Configs

The `viewer_*` configs above expose small Parquet samples so the Hugging Face Dataset Viewer can render the data directly in the browser:

- `viewer_caption_30s` — 5 caption samples from 30-second clips
- `viewer_caption_60s` — 5 caption samples from 60-second clips
- `viewer_caption_180s` — 3 caption samples from 180-second clips
- `viewer_caption_gt10min` — 1 caption sample from >10-minute clips
- `viewer_spatial` — 10 spatial-reasoning samples with embedded thumbnail images, mixed across tasks (refcoco, visual genome, pointing, ca1m, osd, crosspoint, erqa, roborefer)

These previews are intended for schema inspection only. For training, use the full `mid_training_video/` and `spatial/` shards.
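The training subsets ship as WebDataset tar shards, in which files sharing a basename key form one sample. A minimal stdlib sketch of that grouping, run against a tiny in-memory tar — the member names and the `.json`/`.mp4` pairing are hypothetical, not the release's actual schema:

```python
import io
import json
import tarfile
from collections import defaultdict

# Build a tiny in-memory tar mimicking one WebDataset shard; real shards live
# under spatial/ and mid_training_video/60s_rest/.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [
        ("000001.json", json.dumps({"caption": "a toy sample"}).encode()),
        ("000001.mp4", b"\x00fake-video-bytes"),
        ("000002.json", json.dumps({"caption": "another sample"}).encode()),
        ("000002.mp4", b"\x00more-bytes"),
    ]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

# WebDataset convention: group members by basename key, keyed by extension.
samples = defaultdict(dict)
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        key, ext = member.name.rsplit(".", 1)
        samples[key][ext] = tar.extractfile(member).read()

meta = json.loads(samples["000001"]["json"])
print(meta["caption"])  # a toy sample
```

In practice you would stream the real shards with a WebDataset-aware loader rather than `tarfile`, but the key-grouping behavior is the same.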