liuchang666 committed (verified)
Commit 57fd61c · Parent(s): 1987f10

Add files using upload-large-folder tool

.gitattributes CHANGED
@@ -58,3 +58,10 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+merged_train_metadata.json filter=lfs diff=lfs merge=lfs -text
+merged_test_metadata.json filter=lfs diff=lfs merge=lfs -text
+metadata/train_pass_scenes.jsonl filter=lfs diff=lfs merge=lfs -text
+metadata/all_pass_scenes.jsonl filter=lfs diff=lfs merge=lfs -text
+testB/extracted_metadata.json filter=lfs diff=lfs merge=lfs -text
+train/extracted_metadata.json filter=lfs diff=lfs merge=lfs -text
+testA/extracted_metadata.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,368 @@
---
license: other
pretty_name: KubriCount
tags:
- image
- synthetic
- object-counting
- visual-counting
- tar
- shards
---

# KubriCount

KubriCount is a synthetic visual counting dataset.

Each annotation item describes one image, the target category to count, positive instance boxes/points, and a negative category with corresponding negative boxes/points. The dataset was filtered using `vlm_filter_results.json`; only scenes marked as `PASS` are included in the released tar shards.

## Dataset structure

```text
.
├── README.md
├── metadata/
│   ├── all_pass_scenes.jsonl
│   ├── train_pass_scenes.jsonl
│   ├── testA_pass_scenes.jsonl
│   ├── testB_pass_scenes.jsonl
│   ├── shards.jsonl
│   └── dataset_stats.json
├── shards/
│   ├── train/
│   │   ├── train-000000.tar
│   │   ├── train-000001.tar
│   │   └── ...
│   ├── testA/
│   │   ├── testA-000000.tar
│   │   └── ...
│   └── testB/
│       ├── testB-000000.tar
│       └── ...
├── train/
│   ├── vlm_filter_results.json
│   └── other train-level JSON files
├── testA/
│   ├── vlm_filter_results.json
│   └── other testA-level JSON files
├── testB/
│   ├── vlm_filter_results.json
│   └── other testB-level JSON files
└── other dataset-level JSON files
```

The scene folders are stored inside tar shards. Each tar file preserves the original split/level/timestamp/scene structure, for example:

```text
train/level5/20260205_135900/scene_0431/edited_00000.png
train/level5/20260205_135900/scene_0431/...
testB/level1/20260205_132729/scene_0046/...
```

## Splits

```text
train, testA, testB
```

## Sharding information

- Scenes per shard: `5000`
- Total PASS scenes: `110507`
- Total files inside scenes: `442028`
- Total shards: `24`

Shard metadata is available in:

```text
metadata/shards.jsonl
```

Scene-to-shard metadata is available in:

```text
metadata/all_pass_scenes.jsonl
metadata/train_pass_scenes.jsonl
metadata/testA_pass_scenes.jsonl
metadata/testB_pass_scenes.jsonl
```
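Each line of `metadata/shards.jsonl` is a standalone JSON record, so per-split totals can be aggregated with a few lines of Python. A minimal sketch (the two sample lines are abbreviated copies of real records, keeping only the fields used here; in practice iterate over the open file):

```python
import json

# Abbreviated copies of two real records from metadata/shards.jsonl.
sample_lines = [
    '{"split": "train", "shard_index": 19, "shard": "shards/train/train-000019.tar", "num_scenes": 4639, "num_files": 18556, "size_bytes": 16402032640}',
    '{"split": "testA", "shard_index": 1, "shard": "shards/testA/testA-000001.tar", "num_scenes": 462, "num_files": 1848, "size_bytes": 1559941120}',
]


def summarize(lines):
    """Aggregate scene and byte counts per split from shards.jsonl lines."""
    totals = {}
    for line in lines:
        rec = json.loads(line)
        split = totals.setdefault(rec["split"], {"scenes": 0, "bytes": 0})
        split["scenes"] += rec["num_scenes"]
        split["bytes"] += rec["size_bytes"]
    return totals


print(summarize(sample_lines))
```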

## Annotation format

A typical annotation item is:

```json
{
  "image_id": "/mnt/vision_user/changliu/kubric/count_data_best/train/level5/20260205_135900/scene_0431/edited_00000.png",
  "count": 6,
  "box_examples_coordinates": [
    [[[142, 656], [142, 1024], [494, 1024], [494, 656]]],
    [[[230, 610], [230, 945], [632, 945], [632, 610]]]
  ],
  "points": [
    [331.9403381347656, 895.3883056640625],
    [438.8674011230469, 792.9484252929688]
  ],
  "H": 1024,
  "W": 1024,
  "category": "trousers",
  "metadata": {
    "level": 5,
    "split": "train",
    "config_file": "/kubric/config_gpt.json"
  },
  "negative_count": 12,
  "negative_category": "tie",
  "negative_box_examples_coordinates": [
    [[[591, 897], [591, 1008], [723, 1008], [723, 897]]]
  ],
  "negative_points": [
    [655.068603515625, 951.5546875]
  ]
}
```

Field meanings:

- `image_id`: original image path used when generating the annotation.
- `count`: number of target-category objects.
- `category`: target category.
- `box_examples_coordinates`: example bounding boxes for target-category objects; each box is given as four corner points.
- `points`: point annotations for target-category objects.
- `H`, `W`: image height and width in pixels.
- `metadata.level`: difficulty or generation level.
- `metadata.split`: data split.
- `negative_category`: category used as the negative reference.
- `negative_count`: number of negative-category objects.
- `negative_box_examples_coordinates`: example boxes for negative-category objects.
- `negative_points`: point annotations for negative-category objects.
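A basic structural check over one annotation item can catch malformed boxes or out-of-image coordinates early. The invariants below are inferred from the example above (a sketch, not an official validator; note that box and point lists give example instances, so their lengths need not equal `count`):

```python
def check_annotation(item):
    """Basic structural checks for one KubriCount annotation item."""
    h, w = item["H"], item["W"]
    # Every box is four corner points inside the image.
    for group in item["box_examples_coordinates"]:
        for box in group:
            assert len(box) == 4, "each box should have four corner points"
            for x, y in box:
                assert 0 <= x <= w and 0 <= y <= h, "corner outside image"
    # Every point annotation lies inside the image.
    for x, y in item["points"]:
        assert 0 <= x <= w and 0 <= y <= h, "point outside image"
    assert item["count"] >= 0
    return True


# Trimmed version of the example item above.
example = {
    "count": 6,
    "box_examples_coordinates": [[[[142, 656], [142, 1024], [494, 1024], [494, 656]]]],
    "points": [[331.94, 895.39], [438.87, 792.95]],
    "H": 1024,
    "W": 1024,
}
print(check_annotation(example))
```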

The released image files are stored inside the tar shards. To locate an image after extraction, use the split-relative part of `image_id`.

For example, this original path:

```text
/mnt/vision_user/changliu/kubric/count_data_best/train/level5/20260205_135900/scene_0431/edited_00000.png
```

corresponds to the extracted path:

```text
train/level5/20260205_135900/scene_0431/edited_00000.png
```

## PASS filtering

The original dataset was filtered using `vlm_filter_results.json`.

Only scenes whose annotation value is `PASS` are included in the tar shards. For example:

```json
{
  "level1/20260205_132729/scene_0046": "PASS"
}
```

Scenes marked as `FAIL` are excluded.
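Reproducing the filter is a one-liner over the scene-to-verdict mapping. A minimal sketch (the inline dict stands in for `json.load` of a split's `vlm_filter_results.json`):

```python
# Stand-in for: results = json.load(open("train/vlm_filter_results.json"))
results = {
    "level1/20260205_132729/scene_0046": "PASS",
    "level1/20260205_132729/scene_0047": "FAIL",
}

# Keep only scenes whose verdict is PASS, as in the released shards.
pass_scenes = sorted(scene for scene, verdict in results.items() if verdict == "PASS")
print(pass_scenes)
```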

## Manifest format

Each line in `metadata/all_pass_scenes.jsonl` describes one released scene. Example:

```json
{
  "split": "train",
  "scene": "level1/20260205_132641/scene_0001",
  "path_in_dataset": "train/level1/20260205_132641/scene_0001",
  "shard": "shards/train/train-000000.tar",
  "num_files": 4,
  "files": [
    {
      "path": "train/level1/20260205_132641/scene_0001/edited_00000.png",
      "name": "edited_00000.png",
      "size_bytes": 2168861
    },
    {
      "path": "train/level1/20260205_132641/scene_0001/metadata.json",
      "name": "metadata.json",
      "size_bytes": 459308
    },
    {
      "path": "train/level1/20260205_132641/scene_0001/rgba_00000.png",
      "name": "rgba_00000.png",
      "size_bytes": 1660110
    },
    {
      "path": "train/level1/20260205_132641/scene_0001/segmentation_00000.png",
      "name": "segmentation_00000.png",
      "size_bytes": 61694
    }
  ]
}
```

Important fields:

- `split`: dataset split.
- `scene`: scene path relative to the split folder.
- `path_in_dataset`: scene path after extraction.
- `shard`: tar shard containing this scene.
- `num_files`: number of files in this scene.
- `files`: files stored for this scene.
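Because each line carries both `path_in_dataset` and `shard`, a scene-to-shard lookup table is easy to build. A minimal sketch (the sample line mirrors the example above with `files` elided; in practice iterate over the open `metadata/all_pass_scenes.jsonl`):

```python
import json

# Abbreviated manifest line, matching the example above.
sample = (
    '{"split": "train", "scene": "level1/20260205_132641/scene_0001", '
    '"path_in_dataset": "train/level1/20260205_132641/scene_0001", '
    '"shard": "shards/train/train-000000.tar", "num_files": 4, "files": []}'
)


def build_index(lines):
    """Map each scene's path_in_dataset to the shard that contains it."""
    index = {}
    for line in lines:
        rec = json.loads(line)
        index[rec["path_in_dataset"]] = rec["shard"]
    return index


index = build_index([sample])
print(index["train/level1/20260205_132641/scene_0001"])
```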

## Download

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="YOUR_USERNAME_OR_ORG/KubriCount",
    repo_type="dataset",
    local_dir="./KubriCount",
)
```

Command line:

```bash
huggingface-cli download YOUR_USERNAME_OR_ORG/KubriCount \
    --repo-type dataset \
    --local-dir ./KubriCount
```

## Restore the original folder structure

Use the following script to extract all tar shards and copy the JSON files into the restored dataset directory.

```python
from pathlib import Path
import tarfile
import shutil

repo_dir = Path("./KubriCount")
restore_dir = Path("./KubriCount_restored")
splits = ["train", "testA", "testB"]

restore_dir.mkdir(parents=True, exist_ok=True)


def safe_extract(tar, path):
    """Extract a tar archive, refusing members that would escape `path`."""
    path = Path(path).resolve()

    for member in tar.getmembers():
        target = (path / member.name).resolve()
        # Path.is_relative_to (Python 3.9+) avoids the prefix-matching
        # pitfall of str.startswith (e.g. /a/bc matching /a/b).
        if not target.is_relative_to(path):
            raise RuntimeError(f"Unsafe path in tar: {member.name}")

    tar.extractall(path)


# Extract scene folders.
for tar_path in sorted((repo_dir / "shards").glob("*/*.tar")):
    print(f"Extracting {tar_path}")
    with tarfile.open(tar_path, "r") as tar:
        safe_extract(tar, restore_dir)

# Copy dataset-level JSON files.
for p in repo_dir.glob("*.json"):
    shutil.copy2(p, restore_dir / p.name)

# Copy split-level JSON files.
for split in splits:
    src_split_dir = repo_dir / split
    dst_split_dir = restore_dir / split
    dst_split_dir.mkdir(parents=True, exist_ok=True)

    if src_split_dir.exists():
        for p in src_split_dir.glob("*.json"):
            shutil.copy2(p, dst_split_dir / p.name)

print(f"Restored dataset to: {restore_dir}")
```

After extraction:

```text
KubriCount_restored/
├── train/
│   ├── vlm_filter_results.json
│   ├── level1/
│   ├── level2/
│   └── ...
├── testA/
│   ├── vlm_filter_results.json
│   ├── level1/
│   └── ...
├── testB/
│   ├── vlm_filter_results.json
│   ├── level1/
│   └── ...
└── dataset-level JSON files
```

## Resolve `image_id` to a local extracted image path

Some annotation files may contain absolute original paths in `image_id`. After extraction, convert them to local paths as follows:

```python
from pathlib import Path


def resolve_image_id(image_id, restored_root):
    """Map an original absolute image_id to its path under restored_root."""
    restored_root = Path(restored_root)
    parts = Path(image_id).parts

    for split in ["train", "testA", "testB"]:
        if split in parts:
            idx = parts.index(split)
            rel_path = Path(*parts[idx:])
            return restored_root / rel_path

    raise ValueError(f"Cannot find split name in image_id: {image_id}")


image_id = "/mnt/vision_user/changliu/kubric/count_data_best/train/level5/20260205_135900/scene_0431/edited_00000.png"
local_path = resolve_image_id(image_id, "./KubriCount_restored")
print(local_path)
```

Output:

```text
KubriCount_restored/train/level5/20260205_135900/scene_0431/edited_00000.png
```

## Read images directly from tar shards

If you do not want to extract the full dataset, you can read files directly from the tar shards.

```python
from pathlib import Path
import tarfile

repo_dir = Path("./KubriCount")

for tar_path in sorted((repo_dir / "shards").glob("*/*.tar")):
    with tarfile.open(tar_path, "r") as tar:
        for member in tar:
            if not member.isfile():
                continue

            if member.name.endswith((".png", ".jpg", ".jpeg")):
                data = tar.extractfile(member).read()
                print(member.name, len(data))
                break  # first image per shard, as a demo
```

To find which shard contains a specific scene, use:

```text
metadata/all_pass_scenes.jsonl
```
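Once the manifest has told you which shard holds a scene, you can pull just that scene's members instead of scanning every shard. A sketch (the in-memory tar below only makes the example runnable as-is; with the real dataset, open the shard path from the manifest, e.g. `shards/train/train-000000.tar`):

```python
import io
import tarfile


def read_scene(tar, scene_prefix):
    """Return {member_name: bytes} for files under scene_prefix in an open tar."""
    out = {}
    for member in tar.getmembers():
        if member.isfile() and member.name.startswith(scene_prefix + "/"):
            out[member.name] = tar.extractfile(member).read()
    return out


# Build a tiny in-memory shard with one scene file, purely for demonstration.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo("train/level1/20260205_132641/scene_0001/metadata.json")
    payload = b"{}"
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    files = read_scene(tar, "train/level1/20260205_132641/scene_0001")
    print(sorted(files))
```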
merged_test_metadata.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28fca1e2d9249bb0a9e0c68e76cc512c688a154986ea4df72e66a4935dbe7ba2
+ size 252405542
merged_train_metadata.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d227350bc2236864019a9179c48875dde522db5cb5f86e8b9368f318bdaafbe
+ size 3681392733
metadata/all_pass_scenes.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc22430afa0b9abf8ae7018bf40bdc3ed29e5107777518111daf687cbbdfe5cd
+ size 74990964
metadata/dataset_stats.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "created_at": "2026-04-27T05:34:33",
+   "staging_root": "/remote-home/changliu/kubric/count_data_best_pass_staging",
+   "out_root": "/remote-home/changliu/kubric/count_data_best_hf",
+   "splits": [
+     "train",
+     "testA",
+     "testB"
+   ],
+   "scenes_per_shard": 5000,
+   "total_scenes": 110507,
+   "total_missing_scenes": 0,
+   "total_files": 442028,
+   "total_shards": 24,
+   "copied_root_jsons": [
+     "/remote-home/changliu/kubric/count_data_best_hf/merged_test_metadata.json",
+     "/remote-home/changliu/kubric/count_data_best_hf/merged_train_metadata.json"
+   ],
+   "copied_split_jsons": {
+     "train": [
+       "/remote-home/changliu/kubric/count_data_best_hf/train/extracted_metadata.json",
+       "/remote-home/changliu/kubric/count_data_best_hf/train/vlm_filter_results.json"
+     ],
+     "testA": [
+       "/remote-home/changliu/kubric/count_data_best_hf/testA/extracted_metadata.json",
+       "/remote-home/changliu/kubric/count_data_best_hf/testA/vlm_filter_results.json"
+     ],
+     "testB": [
+       "/remote-home/changliu/kubric/count_data_best_hf/testB/extracted_metadata.json",
+       "/remote-home/changliu/kubric/count_data_best_hf/testB/vlm_filter_results.json"
+     ]
+   }
+ }
metadata/shards.jsonl ADDED
@@ -0,0 +1,24 @@
+ {"split": "train", "shard_index": 0, "shard": "shards/train/train-000000.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level1/20260205_132641/scene_0001", "last_scene": "level1/20260205_133730/scene_0314", "size_bytes": 16583331840}
+ {"split": "train", "shard_index": 1, "shard": "shards/train/train-000001.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level1/20260205_133730/scene_0316", "last_scene": "level1/20260205_134656/scene_0079", "size_bytes": 16666982400}
+ {"split": "train", "shard_index": 2, "shard": "shards/train/train-000002.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level1/20260205_134656/scene_0080", "last_scene": "level1/20260205_135403/scene_0419", "size_bytes": 16526315520}
+ {"split": "train", "shard_index": 3, "shard": "shards/train/train-000003.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level1/20260205_135403/scene_0420", "last_scene": "level1/20260206_112434/scene_0355", "size_bytes": 16856832000}
+ {"split": "train", "shard_index": 4, "shard": "shards/train/train-000004.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level1/20260206_112434/scene_0356", "last_scene": "level2/20260205_133905/scene_0485", "size_bytes": 16806543360}
+ {"split": "train", "shard_index": 5, "shard": "shards/train/train-000005.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level2/20260205_133905/scene_0486", "last_scene": "level2/20260205_135416/scene_0044", "size_bytes": 16814530560}
+ {"split": "train", "shard_index": 6, "shard": "shards/train/train-000006.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level2/20260205_135416/scene_0046", "last_scene": "level2/20260206_103155/scene_0096", "size_bytes": 17163059200}
+ {"split": "train", "shard_index": 7, "shard": "shards/train/train-000007.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level2/20260206_103155/scene_0097", "last_scene": "level2/20260206_112713/scene_0167", "size_bytes": 17433825280}
+ {"split": "train", "shard_index": 8, "shard": "shards/train/train-000008.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level2/20260206_112713/scene_0168", "last_scene": "level3/20260205_134051/scene_0300", "size_bytes": 17477560320}
+ {"split": "train", "shard_index": 9, "shard": "shards/train/train-000009.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level3/20260205_134051/scene_0301", "last_scene": "level3/20260205_134941/scene_0003", "size_bytes": 17647165440}
+ {"split": "train", "shard_index": 10, "shard": "shards/train/train-000010.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level3/20260205_134941/scene_0004", "last_scene": "level3/20260206_103621/scene_0054", "size_bytes": 18025472000}
+ {"split": "train", "shard_index": 11, "shard": "shards/train/train-000011.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level3/20260206_103621/scene_0055", "last_scene": "level4/20260205_133116/scene_0211", "size_bytes": 17500057600}
+ {"split": "train", "shard_index": 12, "shard": "shards/train/train-000012.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level4/20260205_133116/scene_0212", "last_scene": "level4/20260205_134225/scene_0473", "size_bytes": 16433203200}
+ {"split": "train", "shard_index": 13, "shard": "shards/train/train-000013.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level4/20260205_134225/scene_0474", "last_scene": "level4/20260205_135652/scene_0127", "size_bytes": 16565452800}
+ {"split": "train", "shard_index": 14, "shard": "shards/train/train-000014.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level4/20260205_135652/scene_0128", "last_scene": "level4/20260206_103216/scene_0056", "size_bytes": 16704051200}
+ {"split": "train", "shard_index": 15, "shard": "shards/train/train-000015.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level4/20260206_103216/scene_0058", "last_scene": "level5/20260205_133224/scene_0011", "size_bytes": 16727388160}
+ {"split": "train", "shard_index": 16, "shard": "shards/train/train-000016.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level5/20260205_133224/scene_0012", "last_scene": "level5/20260205_134347/scene_0005", "size_bytes": 17338859520}
+ {"split": "train", "shard_index": 17, "shard": "shards/train/train-000017.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level5/20260205_134347/scene_0006", "last_scene": "level5/20260205_135156/scene_0073", "size_bytes": 17400074240}
+ {"split": "train", "shard_index": 18, "shard": "shards/train/train-000018.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level5/20260205_135156/scene_0074", "last_scene": "level5/20260206_103220/scene_0134", "size_bytes": 17759703040}
+ {"split": "train", "shard_index": 19, "shard": "shards/train/train-000019.tar", "num_scenes": 4639, "num_files": 18556, "first_scene": "level5/20260206_103220/scene_0135", "last_scene": "level5/20260206_145149/scene_0001", "size_bytes": 16402032640}
+ {"split": "testA", "shard_index": 0, "shard": "shards/testA/testA-000000.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level1/20260205_132725/scene_0001", "last_scene": "level5/20260205_135831/scene_0003", "size_bytes": 16579737600}
+ {"split": "testA", "shard_index": 1, "shard": "shards/testA/testA-000001.tar", "num_scenes": 462, "num_files": 1848, "first_scene": "level5/20260205_135831/scene_0004", "last_scene": "level5/20260206_145126/scene_0001", "size_bytes": 1559941120}
+ {"split": "testB", "shard_index": 0, "shard": "shards/testB/testB-000000.tar", "num_scenes": 5000, "num_files": 20000, "first_scene": "level1/20260205_132729/scene_0001", "last_scene": "level5/20260205_135834/scene_0036", "size_bytes": 16113049600}
+ {"split": "testB", "shard_index": 1, "shard": "shards/testB/testB-000001.tar", "num_scenes": 406, "num_files": 1624, "first_scene": "level5/20260205_135834/scene_0037", "last_scene": "level5/20260206_145130/scene_0001", "size_bytes": 1328527360}
metadata/testA_pass_scenes.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
metadata/testB_pass_scenes.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
metadata/train_pass_scenes.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3a39bb0a0c67568b5d739bd6209921ffac9f49f36d4669f237aac0f33982110
+ size 67617671
shards/testA/testA-000000.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb168831210a4bce64872b402d01c68621b7f2b60f70f042b3c96c0c8fd18e0a
+ size 16579737600
shards/testA/testA-000001.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb15bf9ad33855c041d79c9ad8e651815eccc6e702a8771d55527def2019b6e5
+ size 1559941120
shards/testB/testB-000000.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:414bae2ce94c7374fc4658963669c9777961d614e23aca634eac1b23260e4601
+ size 16113049600
shards/testB/testB-000001.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47476fb9f831dfeabdcc5101479afde77717272120b38ee861d672471b914d26
+ size 1328527360
shards/train/train-000000.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2da8449063f498a88fb495d5edd831d56580359926b9ed5b90e06ec16b419c43
+ size 16583331840
shards/train/train-000001.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6126e61eae12cb3bd160fb1dd9c4dfe12c9215320169e6a811456d378d194de
+ size 16666982400
shards/train/train-000002.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8d2e768330e8d3a364f43a3167ed2ef1a2b4613b85a591ee05230936ec9ad89
+ size 16526315520
shards/train/train-000003.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b39c3d7f144fab1a4ef4248d77109d08b23ca0872fc94617fabb748d1c7f762
+ size 16856832000
shards/train/train-000004.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f9ec9c325d0be9602666ec1b4e2c3453d9c1cce52763dfddceca8d67e7b2f82
+ size 16806543360
shards/train/train-000005.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:57d49fe517f4871bf13f85769652f167fde5edc8d80f46daeb55c3bb3183314b
+ size 16814530560
shards/train/train-000006.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5697837058013297a81ebbde83c9fad53b7d8a1de163c8350a0d80aa20f08fc6
+ size 17163059200
shards/train/train-000007.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9b21bff554df8bd6dc3821b3197377f67bd3d68a081f5ccb25faf07e1bd677b6
+ size 17433825280
shards/train/train-000008.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4aaaa0679ab2dfbac8da1b2de58efe6d0b172df2167798a1265c193fde70dbc
+ size 17477560320
shards/train/train-000009.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4594450a17a833568eeb4b049a1c2fd76f65a23be1c62ab7baf735c1999fcc13
+ size 17647165440
shards/train/train-000010.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:790a73ecc6cd2fca1fb66e5a1963002e390204078aeb3ba4670da5d2373c35f3
+ size 18025472000
shards/train/train-000011.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0cbfebcf5af71155c1723182096f3e0633519cc2f1f2653aea96e0b4785326f0
+ size 17500057600
shards/train/train-000012.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46176452d28dd648e737e20bb7fc6043c1ac5d6210b562a8a8758048c184cfcd
+ size 16433203200
shards/train/train-000013.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d955fb91bd0d57b750c44a1409523aad4c37ec2e8577322dd42881f4ceb9d43a
+ size 16565452800
shards/train/train-000014.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb42c12583ff4df1795c93d20eea92b9b677dd2e83c470ae18926d8c993dc69c
+ size 16704051200
shards/train/train-000015.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f28c5753d00835d0b06071694a7c5e0ffdbff78dd123373dce97f2c69b4dfd4
+ size 16727388160
shards/train/train-000016.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2bf6e1a72d297197774df323da2601e3bc1773782d27a155947dbe8fd30e4d09
+ size 17338859520
shards/train/train-000017.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a9b8491808867a4f31d4d5a1913dbaf36cdf15c10370c13f615108e8eeeb7e5
+ size 17400074240
shards/train/train-000018.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ca59586b21d551264db94fb4761a63748d0b2b343e7d1ee9b5d265e8b5b9d14
+ size 17759703040
shards/train/train-000019.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9ed406d742ac985c636e826389b5f3eb5512bf8eefc04093ca750e5b584def4b
+ size 16402032640
testA/extracted_metadata.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:452c0489e774d117367f524bed140c42da7983cd47dbb7d7caf397a3e49350bb
+ size 148744328
testA/vlm_filter_results.json ADDED
The diff for this file is too large to render. See raw diff

testB/extracted_metadata.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7a8f0382df126fea86a702c95a6b940c0dc955560aa3afeb33d6b7ea13a62b84
+ size 103661216
testB/vlm_filter_results.json ADDED
The diff for this file is too large to render. See raw diff

train/extracted_metadata.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e2d90c017de12fd270659ddb243d3d15f7a2ccb588437600f49d081e7987de13
+ size 3664897716
train/vlm_filter_results.json ADDED
The diff for this file is too large to render. See raw diff