---
license: apache-2.0
task_categories:
- video-text-to-text
- visual-question-answering
- image-text-to-text
language:
- en
tags:
- llava
- multimodal
- video
- spatial-reasoning
size_categories:
- 10M<n<100M
---

## Contents

| Path | Format | Description |
| --- | --- | --- |
| `mid_training_video/` | WebDataset (`.tar`) | Video caption data for 5s, 10s, 30s, 60s, 180s, and >10-minute video clips |
| `spatial/` | WebDataset (`.tar`) | 84 shards of spatial reasoning data (refcoco, visual genome, pointing, 3D, etc.) |
| `mid_training_video/mapping/mapping_{5s,10s,30s,60s,180s,gt10min}.csv` | CSV | Maps each video clip's `dst_path` to its source `youtube_id` and `[start_time, end_time]` window |

## Preview Configs

The `viewer_*` configs expose small Parquet samples so the Hugging Face Dataset Viewer can render the data directly in the browser:

- **`viewer_caption_30s`** — 5 caption samples from 30-second clips
- **`viewer_caption_60s`** — 5 caption samples from 60-second clips
- **`viewer_caption_180s`** — 3 caption samples from 180-second clips
- **`viewer_caption_gt10min`** — 1 caption sample from >10-minute clips
- **`viewer_spatial`** — 10 spatial-reasoning samples with embedded thumbnail images, mixed across tasks (refcoco, visual genome, pointing, ca1m, osd, crosspoint, erqa, roborefer)

These previews are intended for **schema inspection only**. For training, use the full `mid_training_video/` and `spatial/` shards.
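The mapping CSVs are plain tables, so each clip can be joined back to its source video with the standard `csv` module. Below is a minimal sketch over synthetic rows; the column names `dst_path`, `youtube_id`, `start_time`, and `end_time` are assumptions inferred from the description above, not verified headers, and the paths and IDs are made up.

```python
import csv
import io

# Synthetic stand-in for one mapping CSV (e.g. mapping_30s.csv).
# Column names are assumed from the card's description, not verified.
mapping_csv = io.StringIO(
    "dst_path,youtube_id,start_time,end_time\n"
    "clips/30s/000001.mp4,fakeYouTubeID1,12.0,42.0\n"
    "clips/30s/000002.mp4,fakeYouTubeID2,0.0,30.0\n"
)

clips = list(csv.DictReader(mapping_csv))

# Recover each clip's duration from its [start_time, end_time] window.
for row in clips:
    duration = float(row["end_time"]) - float(row["start_time"])
    print(row["dst_path"], row["youtube_id"], f"{duration:.1f}s")
```

For the real files, replace the `io.StringIO` buffer with `open("mid_training_video/mapping/mapping_30s.csv")` (or the duration bucket you need).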
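The full shards follow the WebDataset convention: a `.tar` archive in which files sharing a basename key (e.g. `000000.json` and `000000.mp4`) form one sample. The stdlib-only sketch below builds a tiny synthetic shard and regroups it by key; the sample keys, the `json`/`mp4` extensions, and the `caption` field are illustrative assumptions, not the dataset's actual schema.

```python
import io
import json
import tarfile
from collections import defaultdict

# Build a tiny in-memory tar that mimics one WebDataset shard.
# Keys, extensions, and fields are illustrative assumptions.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [
        ("000000.json", json.dumps({"caption": "a person walks"}).encode()),
        ("000000.mp4", b"\x00fake-video-bytes"),
        ("000001.json", json.dumps({"caption": "a dog runs"}).encode()),
        ("000001.mp4", b"\x00fake-video-bytes"),
    ]:
        info = tarfile.TarInfo(name=name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

# Group archive members into samples by the prefix before the first dot,
# which is how WebDataset associates the files of one sample.
samples = defaultdict(dict)
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        key, ext = member.name.split(".", 1)
        samples[key][ext] = tar.extractfile(member).read()

for key, fields in sorted(samples.items()):
    print(key, json.loads(fields["json"])["caption"])
```

In practice you would point `tarfile.open` (or the `webdataset` library, which handles sharded URLs and decoding for you) at a downloaded shard instead of the in-memory buffer.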