Add README with viewer configs and dataset documentation
| 1 |
+
---
|
| 2 |
+
license: apache-2.0
|
| 3 |
+
task_categories:
|
| 4 |
+
- video-text-to-text
|
| 5 |
+
- visual-question-answering
|
| 6 |
+
- image-text-to-text
|
| 7 |
+
language:
|
| 8 |
+
- en
|
| 9 |
+
tags:
|
| 10 |
+
- llava
|
| 11 |
+
- multimodal
|
| 12 |
+
- video
|
| 13 |
+
- spatial-reasoning
|
| 14 |
+
size_categories:
|
| 15 |
+
- 10M<n<100M
|
| 16 |
+
configs:
|
| 17 |
+
- config_name: viewer_caption_30s
|
| 18 |
+
data_files:
|
| 19 |
+
- split: preview
|
| 20 |
+
path: viewer/caption_30s.parquet
|
| 21 |
+
- config_name: viewer_caption_60s
|
| 22 |
+
data_files:
|
| 23 |
+
- split: preview
|
| 24 |
+
path: viewer/caption_60s.parquet
|
| 25 |
+
- config_name: viewer_spatial
|
| 26 |
+
data_files:
|
| 27 |
+
- split: preview
|
| 28 |
+
path: viewer/spatial.parquet
|
| 29 |
+
---
|
| 30 |
+
|
# LLaVA-OneVision-2-Data

Training data for the LLaVA-OneVision-2 multimodal model family, covering large-scale video and spatial reasoning corpora used in mid-training.

## Dataset Composition

| Subset | Format | Description |
|---|---|---|
| `mid_training_video/60s_rest/` | WebDataset (`.tar`) | 10,809 shards of ~60s video clips |
| `mid_training_video/caption_v0/split_30s.jsonl` | JSONL | Captions for 30-second video clips |
| `mid_training_video/caption_v0/split_60s.jsonl` | JSONL | Captions for 60-second video clips |
| `spatial/` | WebDataset (`.tar`) | 84 shards of spatial reasoning data (refcoco, visual genome, pointing, 3D, etc.) |
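
The caption splits are plain JSONL, one record per line, so they can be read directly once downloaded. A minimal sketch using `huggingface_hub` (the record layout is assumed to match the caption schema documented below):

```python
import json

from huggingface_hub import hf_hub_download

# Download one caption split from the dataset repo.
path = hf_hub_download(
    repo_id="mvp-lab/LLaVA-OneVision-2-Data",
    filename="mid_training_video/caption_v0/split_30s.jsonl",
    repo_type="dataset",
)

# Each line is a standalone JSON record.
with open(path) as f:
    record = json.loads(next(f))
print(record.keys())
```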

## Preview Configs

The three `viewer_*` configs above expose small Parquet samples so the Hugging Face Dataset Viewer can render the data directly in the browser:

- **`viewer_caption_30s`** — 5 caption samples from 30-second clips
- **`viewer_caption_60s`** — 5 caption samples from 60-second clips
- **`viewer_spatial`** — 10 spatial-reasoning samples with embedded thumbnail images, mixed across tasks (refcoco, visual genome, pointing, ca1m, osd, crosspoint, erqa, roborefer)

These previews are intended for **schema inspection only**. For training, use the full `mid_training_video/` and `spatial/` shards.

## Loading Examples

### Stream the full spatial WebDataset

```python
from datasets import load_dataset

# Streaming iterates the .tar shards without downloading the whole subset first.
ds = load_dataset(
    "mvp-lab/LLaVA-OneVision-2-Data",
    data_files="spatial/train_*.tar",
    split="train",
    streaming=True,
)
for sample in ds:
    print(sample)
    break
```
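
The same pattern should work for the video shards by pointing `data_files` at `mid_training_video/60s_rest/` instead (the exact shard filename glob may differ from `train_*.tar`).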

### Load a viewer preview

```python
from datasets import load_dataset

ds = load_dataset("mvp-lab/LLaVA-OneVision-2-Data", "viewer_spatial", split="preview")
print(ds[0])
```

## Schema

### Caption (`viewer_caption_30s`, `viewer_caption_60s`)
- `images_source: List[str]` — relative paths to source video files
- `messages: List[{role, content}]` — chat-format caption prompt and assistant response
- `preview: str` — flattened text preview
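
For instance, a caption record can be split into a prompt/response pair from its `messages` list. A small sketch, assuming the usual `user`/`assistant` role names:

```python
from datasets import load_dataset

ds = load_dataset("mvp-lab/LLaVA-OneVision-2-Data", "viewer_caption_30s", split="preview")

def to_pair(messages):
    """Split a chat-format record into (user prompt, assistant caption)."""
    user = next(m["content"] for m in messages if m["role"] == "user")
    assistant = next(m["content"] for m in messages if m["role"] == "assistant")
    return user, assistant

prompt, caption = to_pair(ds[0]["messages"])
print(prompt[:80], "->", caption[:80])
```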

### Spatial (`viewer_spatial`)
- `id: str` — sample id
- `task: str` — source task name (e.g. `refcoco`, `visual_genome`, `pointing`)
- `image: Image` — first referenced image, embedded as JPEG bytes (downscaled to max edge 1024)
- `image_paths: List[str]` — original relative image paths in the full dataset
- `messages: List[{role, content}]` — chat-format Q/A
- `preview: str` — flattened text preview
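
Since `image` is stored as an HF `Image` feature, `datasets` should decode it to a PIL image on access. A quick inspection sketch (field handling assumed from the schema above):

```python
from datasets import load_dataset

ds = load_dataset("mvp-lab/LLaVA-OneVision-2-Data", "viewer_spatial", split="preview")
sample = ds[0]

sample["image"].save("thumb.jpg")  # embedded JPEG thumbnail, decoded to PIL on access
print(sample["task"], sample["image_paths"])
for turn in sample["messages"]:  # chat-format Q/A turns
    print(f'{turn["role"]}: {turn["content"]}')
```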

## License

Apache 2.0