---
license: apache-2.0
pretty_name: KubriCount
task_categories:
  - object-detection
tags:
  - image
  - synthetic
  - object-counting
  - visual-counting
  - multi-grained-counting
  - tar
  - shards
---

KubriCount

Project Page | Paper | Code

KubriCount is a large-scale synthetic benchmark for multi-grained visual counting, built for the research project Count Anything at Any Granularity.

The dataset targets open-world counting settings where the intended counting granularity must be explicit. A query may ask for a specific identity, an attribute variant, a category, an instance type, or a broader concept. KubriCount provides controlled distractors and dense instance-level supervision for training and evaluation.

Companion generation pipeline code is available at Verg-Avesta/KubriCount.

The paper is available on arXiv.

The released dataset can be used directly and does not require running the generation pipeline.

Highlights

  • Five counting granularities: identity, attribute, category, instance type, and concept.
  • Controlled generalization splits for seen categories, unseen assets, and unseen categories.
  • Dense supervision including counts, center points, 2D boxes, masks, negative categories, and scene metadata.
  • Large scale: 110,507 released scenes/images, 198,702 annotation items/queries, 157 categories, about 7.3M annotated objects, and up to 250 objects per image.
  • Automatic data construction with controllable 3D synthesis, mask-conditioned image editing, and VLM-based quality filtering.

Dataset Statistics

| Split | Released scenes | Annotation items / queries | Purpose |
|-------|-----------------|----------------------------|---------|
| train | 99,639 | 179,140 | Training split with seen categories. |
| testA | 5,462 | 9,837 | Evaluation split with unseen assets from training categories. |
| testB | 5,406 | 9,725 | Evaluation split with unseen categories. |
| Total | 110,507 | 198,702 | |

Levels 2-5 can yield two annotation items from one image by swapping the target and distractor groups. This is why the number of annotation items is larger than the number of released scenes.

Level Statistics

| Level | Train normal | Train dense | TestA | TestB | Total scenes |
|----------|--------|--------|-------|-------|---------|
| L1 | 16,179 | 3,959 | 1,087 | 1,087 | 22,312 |
| L2 size | 7,582 | 2,402 | 569 | 586 | 11,139 |
| L2 color | 8,043 | 2,135 | 600 | 602 | 11,380 |
| L3 | 15,386 | 3,624 | 1,053 | 1,014 | 21,077 |
| L4 | 16,493 | 4,186 | 1,081 | 1,081 | 22,841 |
| L5 | 15,825 | 3,825 | 1,072 | 1,036 | 21,758 |
| Total | 79,508 | 20,131 | 5,462 | 5,406 | 110,507 |

Counting Levels

Each level defines a target set and, when applicable, a controlled distractor set that differs by one semantic factor.

| Level | Granularity | Description |
|-------|-------------|-------------|
| L1 | Identity-level | Count all visible target instances of a single object type. |
| L2 | Attribute-level | Count objects distinguished by size or color while excluding the other attribute variant. |
| L3 | Category-level | Count one category while excluding a different category. |
| L4 | Instance-level | Count one instance type while excluding another instance type from the same category. |
| L5 | Concept-level | Count a category or concept with multiple instance types and plausible distractors. |

Generation Pipeline

KubriCount is generated in four stages:

  1. 3D asset curation: build a categorized 3D asset bank from labeled 3D datasets and controllable 3D generation.
  2. Prototype synthesis: use Kubric, PyBullet, and Blender to render controllable multi-object scenes with exact instance metadata.
  3. Consistent image editing: improve visual realism while preserving object topology and annotations.
  4. Automatic data filtering: reject samples with layout drift, count changes, identity corruption, background hallucination, or severe artifacts.

The tar shards in this release contain only scenes that passed the automatic quality filter. The intermediate PASS/FAIL files are not included.

Dataset Structure

.
├── README.md
├── merged_train_metadata.json
├── merged_test_metadata.json
├── metadata/
│   ├── all_pass_scenes.jsonl
│   ├── train_pass_scenes.jsonl
│   ├── testA_pass_scenes.jsonl
│   ├── testB_pass_scenes.jsonl
│   └── shards.jsonl
├── shards/
│   ├── train/
│   │   ├── train-000000.tar
│   │   ├── train-000001.tar
│   │   └── ...
│   ├── testA/
│   │   ├── testA-000000.tar
│   │   └── testA-000001.tar
│   └── testB/
│       ├── testB-000000.tar
│       └── testB-000001.tar
├── train/
│   └── extracted_metadata.json
├── testA/
│   └── extracted_metadata.json
└── testB/
    └── extracted_metadata.json
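
After downloading (see the Download section below), the layout can be sanity-checked by counting the tar shards per split and confirming that the split-level annotation files are present. A minimal sketch, assuming the repository snapshot lives in ./KubriCount:

from pathlib import Path

repo_dir = Path("./KubriCount")

# Count tar shards per split and check that the split-level annotations exist.
for split in ["train", "testA", "testB"]:
    shards = sorted((repo_dir / "shards" / split).glob("*.tar"))
    annotations = repo_dir / split / "extracted_metadata.json"
    print(f"{split}: {len(shards)} shard(s), annotations found: {annotations.exists()}")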

Files Inside Each Scene

The per-scene folders are stored inside the tar shards. Each tar preserves the split/level/timestamp/scene structure:

train/level5/20260205_135900/scene_0431/edited_00000.png
train/level5/20260205_135900/scene_0431/metadata.json
train/level5/20260205_135900/scene_0431/rgba_00000.png
train/level5/20260205_135900/scene_0431/segmentation_00000.png

Typical scene files (see the inspection sketch after this list):

  • edited_00000.png: final edited image used by the benchmark.
  • rgba_00000.png: original rendered RGBA image before editing.
  • segmentation_00000.png: instance segmentation map.
  • metadata.json: scene-level generation metadata, including camera, asset, split, level, and object information.
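
Once the shards are extracted (see "Restore the Folder Structure" below), a single scene folder can be inspected directly. A minimal sketch, reusing the example scene path shown above and assuming it was extracted to ./KubriCount_restored; the file names come from this README, while the metadata keys printed are simply whatever the JSON contains:

import json
from pathlib import Path

scene_dir = Path("./KubriCount_restored/train/level5/20260205_135900/scene_0431")

# Load the scene-level generation metadata and list its top-level keys.
with open(scene_dir / "metadata.json") as f:
    scene_meta = json.load(f)
print(sorted(scene_meta.keys()))

# The edited image is the one used by the benchmark; the original RGBA render
# and the instance segmentation map sit next to it.
for name in ["edited_00000.png", "rgba_00000.png", "segmentation_00000.png"]:
    print(name, (scene_dir / name).stat().st_size, "bytes")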

Path Convention

All KubriCount image paths in the released annotation files are relative paths. For example:

testA/level1/20260205_132725/scene_0213/edited_00000.png

After extracting the tar shards into a local directory, resolve an image_id with:

from pathlib import Path

root = Path("./KubriCount_restored")
image_path = root / "testA/level1/20260205_132725/scene_0213/edited_00000.png"

Annotation Files

  • train/extracted_metadata.json, testA/extracted_metadata.json, testB/extracted_metadata.json: split-level KubriCount annotations.
  • merged_train_metadata.json: merged KubriCount training metadata.
  • merged_test_metadata.json: combined test metadata for testA and testB.
  • metadata/*_pass_scenes.jsonl: scene-to-shard manifests.
  • metadata/shards.jsonl: one record per tar shard.

A typical annotation item is:

{
  "image_id": "train/level1/20260205_132641/scene_0001/edited_00000.png",
  "count": 104,
  "box_examples_coordinates": [
    [[742, 933], [742, 1024], [850, 1024], [850, 933]],
    [[699, 782], [699, 888], [797, 888], [797, 782]]
  ],
  "points": [
    [796.0, 978.5],
    [748.0, 835.0]
  ],
  "H": 1024,
  "W": 1024,
  "category": "shoe",
  "metadata": {
    "level": 1,
    "split": "train",
    "config_file": "/kubric/config_gpt.json"
  },
  "negative_count": 0,
  "negative_category": "",
  "negative_box_examples_coordinates": [],
  "negative_points": []
}
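
The annotation files can be loaded with the standard json module. The sketch below iterates over a few testA items, resolves each image_id against a restored root, and prints the target and distractor annotations. The per-item fields follow the example above; the top-level container (a list of items versus a dict keyed by image_id) is an assumption, so both shapes are handled:

import json
from pathlib import Path

root = Path("./KubriCount_restored")

# Load a split's annotations (shape of the container is an assumption).
with open(root / "testA" / "extracted_metadata.json") as f:
    data = json.load(f)

if isinstance(data, list):
    items = data
else:
    items = [{"image_id": key, **value} for key, value in data.items()]

for item in items[:5]:
    image_path = root / item["image_id"]
    print(image_path)
    print("  target:", item["category"], "count:", item["count"])
    print("  distractor:", item.get("negative_category", ""),
          "count:", item.get("negative_count", 0))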

Download

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="liuchang666/KubriCount",
    repo_type="dataset",
    local_dir="./KubriCount",
)

Command line:

huggingface-cli download liuchang666/KubriCount \
  --repo-type dataset \
  --local-dir ./KubriCount
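
For a partial download, snapshot_download accepts allow_patterns. A sketch that fetches only the testA shards plus the annotation and manifest files; the pattern list here is just an illustration:

from huggingface_hub import snapshot_download

# Fetch the testA shards plus the top-level, per-split, and manifest JSON files.
snapshot_download(
    repo_id="liuchang666/KubriCount",
    repo_type="dataset",
    local_dir="./KubriCount",
    allow_patterns=[
        "shards/testA/*",
        "testA/*",
        "*.json",
        "metadata/*",
    ],
)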

Restore the Folder Structure

Use the following script to extract the tar shards and copy the annotation JSON files to a restored directory:

from pathlib import Path
import shutil
import tarfile


repo_dir = Path("./KubriCount")
restore_dir = Path("./KubriCount_restored")
splits = ["train", "testA", "testB"]

restore_dir.mkdir(parents=True, exist_ok=True)


def safe_extract(tar, path):
    # Refuse any member whose resolved path would land outside the target directory.
    path = path.resolve()
    for member in tar.getmembers():
        target = (path / member.name).resolve()
        if path not in target.parents and target != path:
            raise RuntimeError(f"Unsafe path in tar: {member.name}")
    tar.extractall(path)


for tar_path in sorted((repo_dir / "shards").glob("*/*.tar")):
    print(f"Extracting {tar_path}")
    with tarfile.open(tar_path, "r") as tar:
        safe_extract(tar, restore_dir)

# Copy the merged annotation files from the repository root.
for p in repo_dir.glob("*.json"):
    shutil.copy2(p, restore_dir / p.name)

# Copy the per-split extracted_metadata.json files.
for split in splits:
    src_split_dir = repo_dir / split
    dst_split_dir = restore_dir / split
    dst_split_dir.mkdir(parents=True, exist_ok=True)
    for p in src_split_dir.glob("*.json"):
        shutil.copy2(p, dst_split_dir / p.name)

print(f"Restored dataset to: {restore_dir}")

Read Images Directly From Tar Shards

from pathlib import Path
import tarfile


repo_dir = Path("./KubriCount")

for tar_path in sorted((repo_dir / "shards").glob("*/*.tar")):
    with tarfile.open(tar_path, "r") as tar:
        # Stream members and stop at the first PNG found in each shard.
        for member in tar:
            if member.isfile() and member.name.endswith(".png"):
                data = tar.extractfile(member).read()
                print(member.name, len(data))
                break
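
The raw bytes returned by extractfile can be decoded in memory without extracting anything to disk. A minimal sketch, assuming Pillow is installed (it is not required by the dataset itself):

import io
import tarfile
from pathlib import Path

from PIL import Image

repo_dir = Path("./KubriCount")
tar_path = next((repo_dir / "shards" / "testA").glob("*.tar"))

# Decode the first edited image in the shard directly from memory.
with tarfile.open(tar_path, "r") as tar:
    for member in tar:
        if member.isfile() and member.name.endswith("edited_00000.png"):
            image = Image.open(io.BytesIO(tar.extractfile(member).read()))
            print(member.name, image.size, image.mode)
            break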

Citation

If you find this dataset useful, please cite:

@article{liu2026count,
  title={Count Anything at Any Granularity},
  author={Liu, Chang and Wu, Haoning and Xie, Weidi},
  journal={arXiv preprint arXiv:2605.10887},
  year={2026}
}

Acknowledgements

KubriCount builds on the Kubric data generation framework.

Contact

For questions, please contact liuchang666@sjtu.edu.cn.