---
language:
  - en
  - ja
license: cc-by-nc-4.0
tags:
  - image
  - text
  - synthetic
  - compositional-generalization
  - vision-language
  - kamon
pretty_name: KamonBench
size_categories:
  - 10K<n<100K
task_categories:
  - image-to-text
viewer: false
---

# KamonBench

A grammar-based image-to-structure benchmark for evaluating compositional
factor recovery in vision-language models, built around Japanese family crests
(_kamon_, 家紋).

Each composite crest is paired with:

- a formal kamon description language string (KDL, _kamon yōgo_, 家紋用語),
- a segmented Japanese analysis,
- an English translation,
- a non-linguistic program code over the generator factors.

Because every crest is synthesized from a known triple of generator factors
(container `C`, modifier `R`, motif `M`), KamonBench supports direct factor
metrics, controlled factor-pair recombination splits, counterfactual motif-
sensitivity tests under fixed (container, modifier) contexts, and linear
probes of factor accessibility from frozen representations. See the
accompanying paper for details and baselines.

The companion code (package, training and evaluation pipelines, and the
generator) lives at
[`SakanaAI/KamonBench`](https://github.com/SakanaAI/KamonBench).

## Quick start

```python
import zipfile
from pathlib import Path

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="SakanaAI/KamonBench",
    repo_type="dataset",
)

# Unpack the image archive; the PNGs land under `dataset01/`. Each Croissant
# file is pinned to the archive (via SHA-256) and references images at
# dataset01/*.png.
with zipfile.ZipFile(Path(local_dir) / "kamon_bench.zip") as zf:
    zf.extractall(local_dir)
```

## Files

| File                                       | Size   | Purpose                                                    |
| ------------------------------------------ | ------ | ---------------------------------------------------------- |
| `kamon_bench.zip`                          | 520 MB | Full PNG image set (54,116 PNGs under `dataset01/`)        |
| `kamon_croissant.json`                     | 34 MB  | Main Croissant 1.0 + RAI metadata, with the standard split |
| `kamon_croissant_program_cm_holdout.json`  | 22 MB  | Croissant variant: held-out (C, M) pairs                   |
| `kamon_croissant_program_rm_holdout.json`  | 22 MB  | Croissant variant: held-out (R, M) pairs                   |
| `kamon_croissant_program_crm_holdout.json` | 22 MB  | Croissant variant: held-out (C, R, M) triples              |
| `LICENSE.txt`                              | —      | CC BY-NC 4.0 license text                                  |
| `README.md`                                | —      | This card                                                  |

The Croissant files live next to the archive (not inside it), because each
file pins the archive's SHA-256.
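Since each Croissant file records the archive's SHA-256, the downloaded zip can be checked against that pinned digest before extraction. A generic streaming-hash sketch (the digest itself lives in the Croissant metadata's distribution entry):

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()


# Compare sha256_of("kamon_bench.zip") against the digest recorded in the
# Croissant file before extracting.
```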

## Dataset structure

The image archive contains 54,116 PNGs under `dataset01/`:

| Slice                 | Count  | Description                                                                                          |
| --------------------- | ------ | ---------------------------------------------------------------------------------------------------- |
| Composite crests      | 20,000 | A container plus motif (with optional modifier), or a containerless spatial arrangement of one motif |
| Base-motif components | 20,000 | One isolated base motif per composite                                                                |
| Container components  | 14,116 | One isolated container per composite that uses one                                                   |

Splits assign whole component groups together with their parent composite, so
component records share the split of the composite they belong to.

| Split | Composites | Components | Total  |
| ----- | ---------- | ---------- | ------ |
| train | 16,000     | 27,280     | 43,280 |
| dev   | 2,000      | 3,405      | 5,405  |
| test  | 2,000      | 3,431      | 5,431  |

Each Croissant record in the `images` record set has these fields:

| Field           | Description                                                    |
| --------------- | -------------------------------------------------------------- |
| `id`            | Unique image identifier                                        |
| `image_path`    | Path to the PNG inside `dataset01/`                            |
| `image`         | The PNG contents (resolved through the Croissant `cr:fileSet`) |
| `description`   | Japanese KDL description                                       |
| `translation`   | English translation                                            |
| `analysis`      | Segmented Japanese analysis (list of `{expr, head}` entries)   |
| `is_composite`  | Whether the record is a composite crest or a component         |
| `component_ids` | For composites, the IDs of the linked component records        |
| `split`         | `"train"`, `"dev"`, or `"test"`                                |
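As an illustration of how these fields link composites to their components, a minimal sketch using made-up in-memory records (real records come from the Croissant `images` record set, and carry the full field list above):

```python
# Hypothetical records, shaped like rows of the `images` record set.
records = {
    "k0001": {"id": "k0001", "is_composite": True, "split": "train",
              "component_ids": ["k0001_m", "k0001_c"]},
    "k0001_m": {"id": "k0001_m", "is_composite": False, "split": "train",
                "component_ids": []},
    "k0001_c": {"id": "k0001_c", "is_composite": False, "split": "train",
                "component_ids": []},
}


def composite_with_components(rid: str) -> list[dict]:
    """Return a composite record followed by its linked component records."""
    composite = records[rid]
    return [composite] + [records[c] for c in composite["component_ids"]]
```

Because split assignment keeps each component group with its parent composite, every record returned by this helper shares one split label.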

For program-label experiments, the same images are paired with non-linguistic
codes for the container (`C:NNN`), modifier (`X:N`), and motif (`M:NNN`); the
three `*_holdout.json` Croissant variants reassign splits so that whole factor
combinations (`(C, M)`, `(R, M)`, or `(C, R, M)`) are absent from training,
while the underlying primitive tokens still appear individually in training.
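The factor codes are easy to read off programmatically. A hedged sketch, assuming the three codes for one image are joined with spaces (the concrete joined-label format is an assumption here, not taken from the dataset):

```python
def parse_program(label: str) -> dict:
    """Split a space-separated program label into its factor tokens.

    Assumes tokens look like C:NNN, X:N, M:NNN (format assumed for
    illustration).
    """
    factors = {}
    for token in label.split():
        prefix, value = token.split(":")
        factors[prefix] = value
    return factors
```

For example, `parse_program("C:012 X:3 M:045")` yields `{"C": "012", "X": "3", "M": "045"}`.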

## Recombination splits

The three holdout variants share the same images as the main file but reassign
the train/dev/test labels so that every test composite contains a held-out
factor combination not seen during training. Primitive tokens remain
represented in training, so the test isolates the question of whether a model
can bind familiar primitives in novel combinations rather than recall whole
crests.
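The holdout property can be stated as a small invariant: every test combination is novel as a whole, while each of its primitive tokens appears somewhere in training. A toy check with made-up codes:

```python
def holdout_ok(train_set: set, test_set: set) -> bool:
    """Check the recombination-holdout invariant on factor triples.

    Every test triple must be absent from training as a combination,
    yet composed entirely of tokens that do occur in training.
    """
    train_tokens = {tok for triple in train_set for tok in triple}
    novel = all(t not in train_set for t in test_set)
    covered = all(tok in train_tokens for t in test_set for tok in t)
    return novel and covered


# Made-up codes: the test triple recombines familiar primitives.
train = {("C:001", "X:1", "M:010"), ("C:002", "X:2", "M:011")}
test = {("C:001", "X:2", "M:011")}
```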

## Limitations and intended use

- KamonBench is a research benchmark for compositional visual recognition,
  factor-aware evaluation, and representation analysis. It is not an
  authoritative cultural or historical catalogue of _kamon_.
- The crests are synthetically rendered from upstream motif assets; they
  differ in style and polish from professionally rendered crests and do not
  cover the full distribution of historical traditions.
- The released generator uses a limited grammar (one level of containment, a
  fixed set of containers and modifiers).
- See `rai:dataLimitations`, `rai:dataBiases`, and `rai:dataSocialImpact` in
  the Croissant metadata for the formal RAI description.

## License

The dataset is released under
[CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/);
see `LICENSE.txt` for the full legal code. The companion code is released
under the MIT License.

The component images bundled with KamonBench (one isolated motif per
composite and one container per contained composite) are repackaged in PNG
form from the _Rebolforces kamondataset_, a publicly available collection of
Japanese kamon motifs. That collection was originally scraped from a
catalogue website that is no longer accessible online (preserved via the
Internet Archive), so upstream provenance cannot be traced further. We make
no copyright claim over those source images and release KamonBench solely
for non-commercial research use.

## Citation

```bibtex
@misc{kamonbench2026,
  title  = {KamonBench: A Grammar-Based Dataset for Evaluating Compositional Factor Recovery in Vision-Language Models},
  author = {Sproat, Richard and Peluchetti, Stefano},
  year   = {2026},
  url    = {https://arxiv.org/abs/2605.13322},
  note   = {arXiv preprint},
}
```