xiangan committed on
Commit 99a7511 · verified · 1 Parent(s): 525bda9

Simplify README and add 180s/gt10min viewer configs

Files changed (1): README.md (+13 -45)

README.md CHANGED
@@ -22,6 +22,14 @@ configs:
   data_files:
   - split: preview
     path: viewer/caption_60s.parquet
+- config_name: viewer_caption_180s
+  data_files:
+  - split: preview
+    path: viewer/caption_180s.parquet
+- config_name: viewer_caption_gt10min
+  data_files:
+  - split: preview
+    path: viewer/caption_gt10min.parquet
 - config_name: viewer_spatial
   data_files:
   - split: preview
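
The two configs this commit adds follow the same pattern as the existing previews, so they load exactly like the `viewer_spatial` example the old README carried. A minimal sketch, assuming the caption previews keep the `preview` column documented in the Schema section removed below:

```python
from datasets import load_dataset

# Load the newly added 180-second caption preview (3 rows per the
# README text below); config name and split come from the YAML above.
ds = load_dataset("mvp-lab/LLaVA-OneVision-2-Data", "viewer_caption_180s", split="preview")
print(ds[0]["preview"])
```
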
@@ -39,58 +47,18 @@ Training data for the LLaVA-OneVision-2 multimodal model family, covering large-
 | `mid_training_video/60s_rest/` | WebDataset (`.tar`) | 10,809 shards of ~60s video clips |
 | `mid_training_video/caption_v0/split_30s.jsonl` | JSONL | Captions for 30-second video clips |
 | `mid_training_video/caption_v0/split_60s.jsonl` | JSONL | Captions for 60-second video clips |
+| `mid_training_video/caption_v0/split_180s.jsonl` | JSONL | Captions for 180-second video clips |
+| `mid_training_video/caption_v0/split_gt10min.jsonl` | JSONL | Captions for >10-minute video clips |
 | `spatial/` | WebDataset (`.tar`) | 84 shards of spatial reasoning data (refcoco, visual genome, pointing, 3D, etc.) |
 
 ## Preview Configs
 
-The three `viewer_*` configs above expose small Parquet samples so the Hugging Face Dataset Viewer can render the data directly in the browser:
+The `viewer_*` configs above expose small Parquet samples so the Hugging Face Dataset Viewer can render the data directly in the browser:
 
 - **`viewer_caption_30s`** — 5 caption samples from 30-second clips
 - **`viewer_caption_60s`** — 5 caption samples from 60-second clips
+- **`viewer_caption_180s`** — 3 caption samples from 180-second clips
+- **`viewer_caption_gt10min`** — 1 caption sample from >10-minute clips
 - **`viewer_spatial`** — 10 spatial-reasoning samples with embedded thumbnail images, mixed across tasks (refcoco, visual genome, pointing, ca1m, osd, crosspoint, erqa, roborefer)
 
 These previews are intended for **schema inspection only**. For training, use the full `mid_training_video/` and `spatial/` shards.
-
-## Loading Examples
-
-### Stream the full spatial WebDataset
-```python
-from datasets import load_dataset
-
-ds = load_dataset(
-    "mvp-lab/LLaVA-OneVision-2-Data",
-    data_files="spatial/train_*.tar",
-    split="train",
-    streaming=True,
-)
-for sample in ds:
-    print(sample)
-    break
-```
-
-### Load a viewer preview
-```python
-from datasets import load_dataset
-
-ds = load_dataset("mvp-lab/LLaVA-OneVision-2-Data", "viewer_spatial", split="preview")
-print(ds[0])
-```
-
-## Schema
-
-### Caption (`viewer_caption_30s`, `viewer_caption_60s`)
-- `images_source: List[str]` — relative paths to source video files
-- `messages: List[{role, content}]` — chat-format caption prompt and assistant response
-- `preview: str` — flattened text preview
-
-### Spatial (`viewer_spatial`)
-- `id: str` — sample id
-- `task: str` — source task name (e.g. `refcoco`, `visual_genome`, `pointing`)
-- `image: Image` — first referenced image, embedded as JPEG bytes (downscaled to max edge 1024)
-- `image_paths: List[str]` — original relative image paths in the full dataset
-- `messages: List[{role, content}]` — chat-format Q/A
-- `preview: str` — flattened text preview
-
-## License
-
-Apache 2.0
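
The JSONL caption files added to the layout table are the full-size counterparts of the new previews. They should stream with the same `data_files` pattern the removed spatial example used; a minimal sketch (builder inference from the `.jsonl` extension and the printed field names are assumptions, not confirmed by the diff):

```python
from datasets import load_dataset

# Stream the new 180-second caption split without downloading the
# whole file; datasets picks the JSON builder from the .jsonl suffix.
ds = load_dataset(
    "mvp-lab/LLaVA-OneVision-2-Data",
    data_files="mid_training_video/caption_v0/split_180s.jsonl",
    split="train",
    streaming=True,
)
for sample in ds:
    print(sample)  # likely keys per the removed schema: images_source, messages
    break
```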
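
Since this commit drops the Schema section, the deleted field list in the diff remains the quickest reference for what the spatial preview rows contain. A short inspection sketch, with field names taken from that removed section (`thumb.jpg` is an arbitrary output path):

```python
from datasets import load_dataset

# Inspect one spatial preview row; field names follow the Schema
# section deleted by this commit.
ds = load_dataset("mvp-lab/LLaVA-OneVision-2-Data", "viewer_spatial", split="preview")
sample = ds[0]
print(sample["task"], sample["image_paths"])  # source task and original image paths
sample["image"].save("thumb.jpg")             # the Image feature decodes to a PIL image
```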