Commit d9351ea by xiangan · verified · Parent: 278e810

Remove 180s and gt10min from README viewer configs

Files changed (1):
  1. README.md (+0 −12)
README.md CHANGED
@@ -22,14 +22,6 @@ configs:
    data_files:
    - split: preview
      path: viewer/caption_60s.parquet
- - config_name: viewer_caption_180s
-   data_files:
-   - split: preview
-     path: viewer/caption_180s.parquet
- - config_name: viewer_caption_gt10min
-   data_files:
-   - split: preview
-     path: viewer/caption_gt10min.parquet
  - config_name: viewer_spatial
    data_files:
    - split: preview
@@ -47,8 +39,6 @@ Training data for the LLaVA-OneVision-2 multimodal model family, covering large-
  | `mid_training_video/60s_rest/` | WebDataset (`.tar`) | 10,809 shards of ~60s video clips |
  | `mid_training_video/caption_v0/split_30s.jsonl` | JSONL | Captions for 30-second video clips |
  | `mid_training_video/caption_v0/split_60s.jsonl` | JSONL | Captions for 60-second video clips |
- | `mid_training_video/caption_v0/split_180s.jsonl` | JSONL | Captions for 180-second video clips |
- | `mid_training_video/caption_v0/split_gt10min.jsonl` | JSONL | Captions for >10-minute video clips |
  | `spatial/` | WebDataset (`.tar`) | 84 shards of spatial reasoning data (refcoco, visual genome, pointing, 3D, etc.) |

  ## Preview Configs
@@ -57,8 +47,6 @@ The `viewer_*` configs above expose small Parquet samples so the Hugging Face Da

  - **`viewer_caption_30s`** — 5 caption samples from 30-second clips
  - **`viewer_caption_60s`** — 5 caption samples from 60-second clips
- - **`viewer_caption_180s`** — 3 caption samples from 180-second clips
- - **`viewer_caption_gt10min`** — 1 caption sample from >10-minute clips
  - **`viewer_spatial`** — 10 spatial-reasoning samples with embedded thumbnail images, mixed across tasks (refcoco, visual genome, pointing, ca1m, osd, crosspoint, erqa, roborefer)

  These previews are intended for **schema inspection only**. For training, use the full `mid_training_video/` and `spatial/` shards.
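After this change, the `configs` block in the README front matter keeps only the 30s, 60s, and spatial viewer entries. A sketch of the resulting YAML, reconstructed from the context lines in the diff (the `caption_30s` and `spatial` parquet paths are inferred by analogy with `viewer/caption_60s.parquet` and may differ in the actual file):

```yaml
configs:
- config_name: viewer_caption_30s
  data_files:
  - split: preview
    path: viewer/caption_30s.parquet   # inferred path, not shown in diff
- config_name: viewer_caption_60s
  data_files:
  - split: preview
    path: viewer/caption_60s.parquet
- config_name: viewer_spatial
  data_files:
  - split: preview
    path: viewer/spatial.parquet       # inferred path, not shown in diff
```

This is the standard Hugging Face dataset-card format: each `config_name` maps a named viewer config to the Parquet files backing its `preview` split.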
 
22
  data_files:
23
  - split: preview
24
  path: viewer/caption_60s.parquet
 
 
 
 
 
 
 
 
25
  - config_name: viewer_spatial
26
  data_files:
27
  - split: preview
 
39
  | `mid_training_video/60s_rest/` | WebDataset (`.tar`) | 10,809 shards of ~60s video clips |
40
  | `mid_training_video/caption_v0/split_30s.jsonl` | JSONL | Captions for 30-second video clips |
41
  | `mid_training_video/caption_v0/split_60s.jsonl` | JSONL | Captions for 60-second video clips |
 
 
42
  | `spatial/` | WebDataset (`.tar`) | 84 shards of spatial reasoning data (refcoco, visual genome, pointing, 3D, etc.) |
43
 
44
  ## Preview Configs
 
47
 
48
  - **`viewer_caption_30s`** — 5 caption samples from 30-second clips
49
  - **`viewer_caption_60s`** — 5 caption samples from 60-second clips
 
 
50
  - **`viewer_spatial`** — 10 spatial-reasoning samples with embedded thumbnail images, mixed across tasks (refcoco, visual genome, pointing, ca1m, osd, crosspoint, erqa, roborefer)
51
 
52
  These previews are intended for **schema inspection only**. For training, use the full `mid_training_video/` and `spatial/` shards.