Datasets: Add files using upload-large-folder tool

Files changed:
- README.md +115 -192
- merged_train_metadata.json +2 -2

README.md CHANGED
@@ -6,26 +6,58 @@ tags:
- synthetic
- object-counting
- visual-counting
---

# KubriCount

-
-
- ##

```text
.
├── README.md
├── metadata/
│   ├── all_pass_scenes.jsonl
│   ├── train_pass_scenes.jsonl
│   ├── testA_pass_scenes.jsonl
│   ├── testB_pass_scenes.jsonl
- │
- │   └── dataset_stats.json
├── shards/
│   ├── train/
│   │   ├── train-000000.tar
@@ -33,167 +65,114 @@ Each annotation item describes one image, the target category to count, positive
│   │   └── ...
│   ├── testA/
│   │   ├── testA-000000.tar
- │   │   └── .
│   └── testB/
│       ├── testB-000000.tar
- │       └── .
├── train/
- │
- │   └── other train-level json files
├── testA/
- │
-
-
- │   ├── vlm_filter_results.json
- │   └── other testB-level json files
- └── other dataset-level json files
```

- The

```text
train/level5/20260205_135900/scene_0431/edited_00000.png
- train/level5/20260205_135900/scene_0431/.
-
```

-
-
- ```text
- train, testA, testB
- ```
-
- ## Sharding information

-
- - Total PASS scenes: `110507`
- - Total files inside scenes: `442028`
- - Total shards: `24`

-

```text
-
```

-

- ```
-
-
-
-
```

- ## Annotation

A typical annotation item is:

```json
{
-   "image_id": "
-   "count":
  "box_examples_coordinates": [
-     [[
-     [[
  ],
  "points": [
-     [
-     [
  ],
  "H": 1024,
  "W": 1024,
-   "category": "
  "metadata": {
-     "level":
    "split": "train",
    "config_file": "/kubric/config_gpt.json"
  },
-   "negative_count":
-   "negative_category": "
-   "negative_box_examples_coordinates": [
-
-   ],
-   "negative_points": [
-     [655.068603515625, 951.5546875]
-   ]
}
```

Field meanings:
- - `image_id`:
- `count`: number of target-category objects.
- - `category`: target category.
- - `box_examples_coordinates`:
- - `points`:
- `H`, `W`: image height and width.
- - `metadata.level`:
- - `metadata.split`:
- - `negative_category`: category
- - `negative_count`: number of
- - `negative_box_examples_coordinates`:
- - `negative_points`:
-
- The released image files are stored inside tar shards. To locate an image after extraction, use the split-relative part of `image_id`.
-
- For example, this original path:
-
- ```text
- /mnt/vision_user/changliu/kubric/count_data_best/train/level5/20260205_135900/scene_0431/edited_00000.png
- ```
-
- corresponds to the extracted path:
-
- ```text
- train/level5/20260205_135900/scene_0431/edited_00000.png
- ```
-
- ## PASS filtering
-
- The original dataset was filtered using `vlm_filter_results.json`.
-
- Only scenes whose annotation value is `PASS` are included in the tar shards. For example:
-
- ```json
- {
-   "level1/20260205_132729/scene_0046": "PASS"
- }
- ```

-

-
-
- Each line in `metadata/all_pass_scenes.jsonl` describes one released scene. Example:

```json
{
-   "split": "
-   "scene": "level1/
-   "path_in_dataset": "
-   "shard": "shards/
  "num_files": 4,
  "files": [
    {
-       "path": "
      "name": "edited_00000.png",
-       "size_bytes":
-     },
-     {
-       "path": "train/level1/20260205_132641/scene_0001/metadata.json",
-       "name": "metadata.json",
-       "size_bytes": 459308
-     },
-     {
-       "path": "train/level1/20260205_132641/scene_0001/rgba_00000.png",
-       "name": "rgba_00000.png",
-       "size_bytes": 1660110
-     },
-     {
-       "path": "train/level1/20260205_132641/scene_0001/segmentation_00000.png",
-       "name": "segmentation_00000.png",
-       "size_bytes": 61694
    }
  ]
}
@@ -214,7 +193,7 @@ Important fields:

from huggingface_hub import snapshot_download

snapshot_download(
-     repo_id="
    repo_type="dataset",
    local_dir="./KubriCount",
)
@@ -223,19 +202,19 @@ snapshot_download(

Command line:

```bash
- huggingface-cli download
    --repo-type dataset \
    --local-dir ./KubriCount
```

- ## Restore the

- Use the following script to extract

```python
from pathlib import Path
- import tarfile
import shutil


repo_dir = Path("./KubriCount")
@@ -246,37 +225,28 @@ restore_dir.mkdir(parents=True, exist_ok=True)

def safe_extract(tar, path):
-     path =
-
    for member in tar.getmembers():
        target = (path / member.name).resolve()
-         if not
            raise RuntimeError(f"Unsafe path in tar: {member.name}")
-
    tar.extractall(path)


- # Extract scene folders.
for tar_path in sorted((repo_dir / "shards").glob("*/*.tar")):
    print(f"Extracting {tar_path}")
    with tarfile.open(tar_path, "r") as tar:
        safe_extract(tar, restore_dir)

-
- # Copy dataset-level JSON files.
for p in repo_dir.glob("*.json"):
    shutil.copy2(p, restore_dir / p.name)

-
- # Copy split-level JSON files.
for split in splits:
    src_split_dir = repo_dir / split
    dst_split_dir = restore_dir / split
    dst_split_dir.mkdir(parents=True, exist_ok=True)
-
-
-     for p in src_split_dir.glob("*.json"):
-         shutil.copy2(p, dst_split_dir / p.name)

print(f"Restored dataset to: {restore_dir}")
```
@@ -286,58 +256,19 @@ After extraction:

```text
KubriCount_restored/
├── train/
- │   ├──
│   └── level1/
- │   └── level2/
- │   └── ...
├── testA/
- │   ├──
│   └── level1/
- │   └── ...
├── testB/
- │   ├──
│   └── level1/
-
- └──
```

- ##
-
- Some annotation files may contain absolute original paths in `image_id`. After extraction, convert them to local paths as follows:
-
- ```python
- from pathlib import Path
-
-
- def resolve_image_id(image_id, restored_root):
-     restored_root = Path(restored_root)
-
-     parts = Path(image_id).parts
-
-     for split in ["train", "testA", "testB"]:
-         if split in parts:
-             idx = parts.index(split)
-             rel_path = Path(*parts[idx:])
-             return restored_root / rel_path
-
-     raise ValueError(f"Cannot find split name in image_id: {image_id}")
-
-
- image_id = "/mnt/vision_user/changliu/kubric/count_data_best/train/level5/20260205_135900/scene_0431/edited_00000.png"
- local_path = resolve_image_id(image_id, "./KubriCount_restored")
-
- print(local_path)
- ```
-
- Output:
-
- ```text
- KubriCount_restored/train/level5/20260205_135900/scene_0431/edited_00000.png
- ```
-
- ## Read images directly from tar shards
-
- If you do not want to extract the full dataset, you can read files directly from tar shards.

```python
from pathlib import Path
@@ -349,26 +280,18 @@ repo_dir = Path("./KubriCount")

for tar_path in sorted((repo_dir / "shards").glob("*/*.tar")):
    with tarfile.open(tar_path, "r") as tar:
        for member in tar:
-             if
-
-
-             if member.name.endswith(".png") or member.name.endswith(".jpg") or member.name.endswith(".jpeg"):
-                 f = tar.extractfile(member)
-                 data = f.read()
                print(member.name, len(data))
                break
```

- To find
-
- ```text
- metadata/all_pass_scenes.jsonl
- ```

- ##

-

## Contact

-
@@ -6,26 +6,58 @@ tags:
- synthetic
- object-counting
- visual-counting
+ - multi-grained-counting
+ - tar
+ - shards
---

# KubriCount

+ KubriCount is a large-scale synthetic benchmark for **multi-grained visual counting**, built for the research project **Count Anything at Any Granularity**.

+ The dataset targets open-world counting settings where the intended counting granularity must be explicit. A query may ask for a specific identity, an attribute variant, a category, an instance type, or a broader concept. KubriCount is generated with controllable 3D synthesis, mask-conditioned image editing, and VLM-based filtering, and provides dense instance-level supervision for training and evaluation.

+ ## Highlights
+
+ - **Five counting granularities**: identity, attribute, category, instance type, and concept.
+ - **Controlled distractors** for testing prompt following under fine-grained distinctions.
+ - **Dense supervision** including counts, center points, 2D boxes, negative categories, and scene-level metadata.
+ - **Large scale**: 110,507 released scenes/images, 157 categories, about 7.3M annotated objects, and up to 250 objects per image.
+ - **Generalization splits** for seen categories, unseen assets, and unseen categories.
+
+ ## Splits
+
+ | Split | Released scenes | Purpose |
+ | --- | ---: | --- |
+ | `train` | 99,639 | Training split with seen categories. |
+ | `testA` | 5,462 | Evaluation split with unseen assets from training categories. |
+ | `testB` | 5,406 | Evaluation split with unseen categories. |
+
+ The released tar shards contain only scenes that passed the automatic quality filter.
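
These split sizes can be re-derived from the released scene manifest; the following is a minimal sketch, assuming one JSON record per line with a `split` field as shown under Manifest Format below:

```python
# Minimal sketch: recompute per-split scene counts from the manifest.
import json
from collections import Counter
from pathlib import Path

counts = Counter()
with open(Path("./KubriCount") / "metadata" / "all_pass_scenes.jsonl") as f:
    for line in f:
        counts[json.loads(line)["split"]] += 1

print(counts)  # expected: train 99639, testA 5462, testB 5406
```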
+
+ ## Counting Levels
+
+ | Level | Granularity | Description |
+ | --- | --- | --- |
+ | L1 | Identity-level | Count all instances of a single object type. |
+ | L2 | Attribute-level | Count objects distinguished by size or color. |
+ | L3 | Category-level | Count one category while excluding another category. |
+ | L4 | Instance-level | Count one instance type within the same category. |
+ | L5 | Concept-level | Count a category or concept with multiple instance types and plausible distractors. |
+
+ ## Dataset Structure

|
| 51 |
.
|
| 52 |
├── README.md
|
| 53 |
+
├── merged_train_metadata.json
|
| 54 |
+
├── merged_test_metadata.json
|
| 55 |
├── metadata/
|
| 56 |
│ ├── all_pass_scenes.jsonl
|
| 57 |
│ ├── train_pass_scenes.jsonl
|
| 58 |
│ ├── testA_pass_scenes.jsonl
|
| 59 |
│ ├── testB_pass_scenes.jsonl
|
| 60 |
+
│ └── shards.jsonl
|
|
|
|
| 61 |
├── shards/
|
| 62 |
│ ├── train/
|
| 63 |
│ │ ├── train-000000.tar
|
|
|
|
| 65 |
│ │ └── ...
|
| 66 |
│ ├── testA/
|
| 67 |
│ │ ├── testA-000000.tar
|
| 68 |
+
│ │ └── testA-000001.tar
|
| 69 |
│ └── testB/
|
| 70 |
│ ├── testB-000000.tar
|
| 71 |
+
│ └── testB-000001.tar
|
| 72 |
├── train/
|
| 73 |
+
│ └── extracted_metadata.json
|
|
|
|
| 74 |
├── testA/
|
| 75 |
+
│ └── extracted_metadata.json
|
| 76 |
+
└── testB/
|
| 77 |
+
└── extracted_metadata.json
|
|
|
|
|
|
|
|
|
|
| 78 |
```
|
| 79 |
|
| 80 |
+
The image folders are stored inside tar shards. Each tar preserves the split/level/timestamp/scene structure:
|
| 81 |
|
| 82 |
```text
|
| 83 |
train/level5/20260205_135900/scene_0431/edited_00000.png
|
| 84 |
+
train/level5/20260205_135900/scene_0431/metadata.json
|
| 85 |
+
train/level5/20260205_135900/scene_0431/rgba_00000.png
|
| 86 |
+
train/level5/20260205_135900/scene_0431/segmentation_00000.png
|
| 87 |
```
|
| 88 |
|
| 89 |
+
The release intentionally does **not** include `metadata/dataset_stats.json` or per-split `vlm_filter_results.json` files.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|

+ ## Path Convention

+ All KubriCount image paths in the released annotation files are relative paths. For example:

```text
+ testA/level1/20260205_132725/scene_0213/edited_00000.png
```

+ After extracting the tar shards into a local directory, resolve an `image_id` with:

+ ```python
+ from pathlib import Path
+
+ root = Path("./KubriCount_restored")
+ image_path = root / "testA/level1/20260205_132725/scene_0213/edited_00000.png"
```

+ ## Annotation Files
+
+ - `train/extracted_metadata.json`, `testA/extracted_metadata.json`, `testB/extracted_metadata.json`: split-level KubriCount annotations.
+ - `merged_train_metadata.json`: merged KubriCount training metadata.
+ - `merged_test_metadata.json`: combined test metadata for `testA` and `testB`.
+ - `metadata/*_pass_scenes.jsonl`: scene-to-shard manifests.
+ - `metadata/shards.jsonl`: one record per tar shard.

A typical annotation item is:

```json
{
+   "image_id": "train/level1/20260205_132641/scene_0001/edited_00000.png",
+   "count": 104,
  "box_examples_coordinates": [
+     [[742, 933], [742, 1024], [850, 1024], [850, 933]],
+     [[699, 782], [699, 888], [797, 888], [797, 782]]
  ],
  "points": [
+     [796.0, 978.5],
+     [748.0, 835.0]
  ],
  "H": 1024,
  "W": 1024,
+   "category": "shoe",
  "metadata": {
+     "level": 1,
    "split": "train",
    "config_file": "/kubric/config_gpt.json"
  },
+   "negative_count": 0,
+   "negative_category": "",
+   "negative_box_examples_coordinates": [],
+   "negative_points": []
}
```

Field meanings:

+ - `image_id`: relative path to the edited image after shard extraction.
- `count`: number of target-category objects.
+ - `category`: target category or target phrase.
+ - `box_examples_coordinates`: target-object 2D boxes represented by four corner points.
+ - `points`: target-object center points.
- `H`, `W`: image height and width.
+ - `metadata.level`: counting granularity level.
+ - `metadata.split`: dataset split.
+ - `negative_category`: distractor category or phrase, when applicable.
+ - `negative_count`: number of distractor objects.
+ - `negative_box_examples_coordinates`: distractor-object 2D boxes.
+ - `negative_points`: distractor-object center points.
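
To consume these fields programmatically, here is a minimal sketch that loads one split-level annotation file and inspects an item; it assumes the file holds a JSON list of items shaped like the example above, so adjust if the top level differs:

```python
import json
from pathlib import Path

# Assumption: the split-level file is a JSON list of annotation items
# shaped like the example above.
items = json.loads(Path("./KubriCount/train/extracted_metadata.json").read_text())

item = items[0]
print(item["image_id"], item["category"], item["count"])
print(len(item["points"]), "center points,",
      len(item["box_examples_coordinates"]), "exemplar boxes")
print("level:", item["metadata"]["level"], "split:", item["metadata"]["split"])
```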

+ ## Manifest Format

+ Each line in `metadata/all_pass_scenes.jsonl` describes one released scene and where it is stored:

```json
{
+   "split": "testA",
+   "scene": "level1/20260205_132725/scene_0001",
+   "path_in_dataset": "testA/level1/20260205_132725/scene_0001",
+   "shard": "shards/testA/testA-000000.tar",
  "num_files": 4,
  "files": [
    {
+       "path": "testA/level1/20260205_132725/scene_0001/edited_00000.png",
      "name": "edited_00000.png",
+       "size_bytes": 1562567
    }
  ]
}
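
A minimal lookup sketch against this manifest, using the field names from the record above (the scene name is the one from the example and is only illustrative):

```python
import json
from pathlib import Path

repo_dir = Path("./KubriCount")
wanted_split, wanted_scene = "testA", "level1/20260205_132725/scene_0001"

# Scan the manifest line by line until the scene is found.
shard = None
with open(repo_dir / "metadata" / "all_pass_scenes.jsonl") as f:
    for line in f:
        record = json.loads(line)
        if record["split"] == wanted_split and record["scene"] == wanted_scene:
            shard = repo_dir / record["shard"]
            break

print(shard)  # e.g. KubriCount/shards/testA/testA-000000.tar
```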

@@ -214,7 +193,7 @@ Important fields:

from huggingface_hub import snapshot_download

snapshot_download(
+     repo_id="liuchang666/KubriCount",
    repo_type="dataset",
    local_dir="./KubriCount",
)

@@ -223,19 +202,19 @@ snapshot_download(
Command line:

```bash
+ huggingface-cli download liuchang666/KubriCount \
    --repo-type dataset \
    --local-dir ./KubriCount
```

+ ## Restore the Folder Structure

+ Use the following script to extract the tar shards and copy the annotation JSON files to a restored directory:

```python
from pathlib import Path
import shutil
+ import tarfile


repo_dir = Path("./KubriCount")
@@ -246,37 +225,28 @@ restore_dir.mkdir(parents=True, exist_ok=True)


def safe_extract(tar, path):
+     path = path.resolve()
    for member in tar.getmembers():
        target = (path / member.name).resolve()
+         if path not in target.parents and target != path:
            raise RuntimeError(f"Unsafe path in tar: {member.name}")
    tar.extractall(path)


for tar_path in sorted((repo_dir / "shards").glob("*/*.tar")):
    print(f"Extracting {tar_path}")
    with tarfile.open(tar_path, "r") as tar:
        safe_extract(tar, restore_dir)

for p in repo_dir.glob("*.json"):
    shutil.copy2(p, restore_dir / p.name)

for split in splits:
    src_split_dir = repo_dir / split
    dst_split_dir = restore_dir / split
    dst_split_dir.mkdir(parents=True, exist_ok=True)
+     for p in src_split_dir.glob("*.json"):
+         shutil.copy2(p, dst_split_dir / p.name)

print(f"Restored dataset to: {restore_dir}")
```
@@ -286,58 +256,19 @@ After extraction:

```text
KubriCount_restored/
├── train/
+ │   ├── extracted_metadata.json
│   └── level1/
├── testA/
+ │   ├── extracted_metadata.json
│   └── level1/
├── testB/
+ │   ├── extracted_metadata.json
│   └── level1/
+ ├── merged_train_metadata.json
+ └── merged_test_metadata.json
```
|
| 271 |
+
## Read Images Directly From Tar Shards
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 272 |
|
| 273 |
```python
|
| 274 |
from pathlib import Path
|
|
|
|
for tar_path in sorted((repo_dir / "shards").glob("*/*.tar")):
    with tarfile.open(tar_path, "r") as tar:
        for member in tar:
+             if member.isfile() and member.name.endswith(".png"):
+                 data = tar.extractfile(member).read()
                print(member.name, len(data))
                break
```

+ To find the shard for a specific scene, use `metadata/all_pass_scenes.jsonl`.
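
Combining the manifest with the shard reader above, here is a minimal sketch that pulls one scene's files straight from its shard without a full extraction; it assumes tar member names match the manifest `path` values, as the Manifest Format example suggests:

```python
import json
import tarfile
from pathlib import Path

repo_dir = Path("./KubriCount")

# Take the first manifest record; any record works the same way.
with open(repo_dir / "metadata" / "all_pass_scenes.jsonl") as f:
    record = json.loads(next(f))

# Assumption: tar member names equal the manifest "path" values.
with tarfile.open(repo_dir / record["shard"], "r") as tar:
    for entry in record["files"]:
        data = tar.extractfile(entry["path"]).read()
        print(entry["name"], len(data), "bytes")
```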

+ ## Related Project

+ Research project: **Count Anything at Any Granularity**.

## Contact

+ For questions, please contact liuchang666@sjtu.edu.cn.
merged_train_metadata.json CHANGED

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:4ef4eb4940054be554bcc39697cb8778323daca44755630ae7522af017b2f3cc
+ size 3656119857