---
annotations_creators:
- expert-annotated
language_creators:
- found
language:
- en
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- imagenet
task_categories:
- object-detection
- image-classification
task_ids:
- multi-label-image-classification
pretty_name: "ReImageNet"
tags:
- imagenet
- reannotation
- multi-label
- bounding-box
- computer-vision
- attributes
- localization
configs:
- config_name: default
  data_files:
  - split: test
    path: reannotation.jsonl
extra_gated_prompt: "This dataset is released for research purposes only. By requesting access, you agree to: (1) cite the associated paper in any publication or project that uses this dataset; (2) not redistribute the dataset or its annotations without permission from the authors."
extra_gated_fields:
  I agree to the terms above: checkbox
---

> [!IMPORTANT]
> **Note for reviewers:** This dataset uses Hugging Face gated access
> with automatic approval to comply with NeurIPS submission requirements.
> Unfortunately, Hugging Face automatically shares your username and email
> with the dataset authors upon access request — we cannot disable this.
> To preserve anonymity during review, please either create a new
> anonymous Hugging Face account to request access, or use the copy of
> the dataset included in the supplementary material of our submission,
> which requires no authentication.

# ReImageNet

**ReImageNet** is a complete multi-label reannotation, with localization, of the
ImageNet-1K validation set (ILSVRC2012). A team of seven trained in-house
annotators reviewed all 50,000 validation images in an iterative annotation
process, producing per-image bounding boxes with class labels and annotation
attributes that correct and extend the original single-label ground truth.
Class names and definitions were revised where the original WordNet-based
names no longer matched the actual image content.

> **Note on images:** This repository contains **annotations only**.
> The images are part of the ImageNet dataset and must be obtained
> separately from [image-net.org](https://image-net.org/download).
> Match images to annotations using the `file_path` field
> (e.g. `n01440764/ILSVRC2012_val_00000293.JPEG`).

---

## Dataset Summary

| Property | Value |
|---|---|
| Base dataset | ImageNet-1K validation set (ILSVRC2012) |
| Images | 50,000 |
| Bounding boxes | 99,534 |
| ImageNet classes | 1,000 |
| Labels per image | mean 1.63, median 1 |
| Bounding boxes per image | mean 1.99, median 1 |
| Single-label images (S) | 62.6% |
| Multi-label images (M) | 32.7% |
| No-valid-label images (N) | 4.7% |
| Annotators | 7 trained non-domain-expert annotators |
| License (annotations) | CC BY 4.0 |

---

## Dataset Structure

### Files

| File | Description |
|---|---|
| `reannotation.jsonl` | Main annotation file — one JSON record per line |
| `label_names.json` | List of 1,000 ImageNet synset IDs indexed by class integer (0–999) |
| `class_update_config.json` | Configuration file listing equivalent classes (visually indistinguishable ImageNet class pairs treated as interchangeable during evaluation) |
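
The files above can be parsed with a few lines of plain Python. The sketch below is illustrative rather than part of the dataset; the helper names are ours, and it assumes `reannotation.jsonl` and `label_names.json` sit in the working directory (pass explicit paths otherwise):

```python
import json

def load_label_names(path="label_names.json"):
    """Load the 1,000 synset IDs, indexed by class integer (0-999)."""
    with open(path) as f:
        return json.load(f)

def load_annotations(path="reannotation.jsonl"):
    """Yield one annotation record per line of the JSONL file."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def synsets_for(record, label_names):
    """Map a record's reannotated label integers to synset IDs.

    Skips -1 ("Not Sure") markers, which have no synset entry.
    """
    return [label_names[i] for i in record["reannotated_labels"] if i >= 0]
```

Joining `record["file_path"]` onto your local ImageNet validation root then locates the corresponding image.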
### Data Fields

Each line in `reannotation.jsonl` is a JSON object with the following fields:

| Field | Type | Description |
|---|---|---|
| `image_name` | `string` | Filename (e.g. `ILSVRC2012_val_00004410.JPEG`) |
| `original_class` | `int[]` | Original ImageNet label(s). Usually a single integer; some images have two labels due to ImageNet equivalent classes |
| `reannotated_labels` | `int[]` | All class labels visible in the image, as determined by annotators |
| `file_path` | `string` | Relative path within the ImageNet val directory (`{synset}/{image_name}`) |
| `bboxes` | `object[]` | List of bounding-box annotations (see below) |

All class integers are indices into `label_names.json` (0-indexed).

#### Bounding Box Fields

Each element of `bboxes` is:

| Field | Type | Description |
|---|---|---|
| `coordinates` | `int[4]` | Bounding box in pixel space: `[x1, y1, x2, y2]` (top-left, bottom-right). Boxes enclose the object with a moderate margin. |
| `labels` | `int[]` | Class label(s) for this box (indices into `label_names.json`) |
| `group` | `int \| null` | Group ID. When multiple bbox entries share the same non-null `group` value, they represent **the same physical object** with multiple labels. Coordinates are identical across grouped entries. |
| `crowd_flag` | `bool` | True if the bbox covers five or more instances of the same class collapsed into a single box. The exact instance count is not recorded. |
| `reflected_flag` | `bool` | True if the object is a reflection in a mirror or water surface, annotated independently of whether the reflected object itself is also visible. |
| `rendition_flag` | `bool` | True if the object is an artificial or stylized representation (toy, drawing, sculpture, logo, etc.) rather than a real instance. |
| `ocr_needed_flag` | `bool` | True if correct classification requires reading text visible in the image. |
| `dominant_object` | `bool` | True if the object is one a person would notice immediately upon viewing the image. Based on annotator judgment rather than size alone. |
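
Because grouped entries duplicate coordinates, consumers usually want to collapse them before counting objects. A minimal sketch of one way to do this (the helper name is ours, not part of the release):

```python
def merge_grouped_boxes(bboxes):
    """Collapse bbox entries that share a non-null `group` id into a
    single physical object carrying the union of their labels.

    Entries with `group` = None already describe one object each.
    """
    merged = {}   # group id -> merged entry (also appended to result)
    result = []
    for box in bboxes:
        gid = box.get("group")
        if gid is None:
            result.append({"coordinates": box["coordinates"],
                           "labels": list(box["labels"])})
            continue
        if gid not in merged:
            # Grouped entries share identical coordinates, so keeping
            # the first occurrence is sufficient.
            merged[gid] = {"coordinates": box["coordinates"], "labels": []}
            result.append(merged[gid])
        merged[gid]["labels"].extend(box["labels"])
    return result
```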
> **Label interpretation note:** An empty `reannotated_labels` list (`[]`)
> indicates the image contains no valid ImageNet class — corresponding to
> the `N` category in the paper, where annotators were confident no
> ImageNet object is present. A label with value `-1` indicates the annotator
> was uncertain about the correct label (marked as *Not Sure* during
> annotation); these images are flagged for review in the ongoing
> verification phase.
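
Following that convention, records can be bucketed into the paper's S/M/N categories. This is our own sketch; treating any `-1` entry as a separate "unsure" bucket is an assumption (such records are pending verification):

```python
def categorize(record):
    """Classify a record as 'S' (single label), 'M' (multi-label),
    'N' (no valid ImageNet class), or 'unsure' when any label is the
    -1 "Not Sure" marker awaiting verification."""
    labels = record["reannotated_labels"]
    if -1 in labels:
        return "unsure"
    if not labels:
        return "N"
    return "S" if len(labels) == 1 else "M"
```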
### label_names.json

A JSON array of 1,000 WordNet synset IDs ordered by class index:

```json
["n01440764", "n01443537", ..., "n15075141"]
```

`label_names[i]` is the synset for class integer `i`.

---

### class_update_config.json

Contains two entries. `eq_classes` lists pairs of ImageNet classes that
are visually indistinguishable or semantically equivalent (e.g.
`laptop`/`notebook computer`, `bathtub`/`tub`) and are therefore treated
as interchangeable during evaluation — a prediction of either class is
counted as correct. `metadata` records the creation date and the date of
the last update of this list.
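
An equivalence-aware correctness check can then look like the sketch below. The function names are ours, and we assume `eq_classes` is stored as a list of two-element class-index pairs — check the file itself for the exact layout:

```python
def build_equivalents(eq_classes):
    """Map each class index to the set of classes interchangeable with it.

    `eq_classes` is assumed to be a list of [a, b] pairs, as described
    for class_update_config.json.
    """
    eq = {}
    for a, b in eq_classes:
        eq.setdefault(a, {a}).add(b)
        eq.setdefault(b, {b}).add(a)
    return eq

def is_correct(pred, true_labels, eq):
    """A prediction counts as correct if it matches any ground-truth
    label directly or through an equivalent-class pair."""
    acceptable = set(true_labels)
    for t in true_labels:
        acceptable |= eq.get(t, set())
    return pred in acceptable
```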

---

## Evaluation code

Evaluation code is available at
[github.com/klarajanouskova/ImageNet](https://github.com/klarajanouskova/ImageNet).

## Annotation Attributes

Each bounding box can carry one or more attributes that capture visually
distinct properties of the depicted object:

- **Rendition** — the object is a toy, drawing, sculpture, or other
  artificial representation. Tests model robustness to non-real-world
  depictions.
- **Crowd** — five or more instances of the same class, collapsed into a
  single bounding box. The exact count is not recorded.
- **Text-recognition** — correct classification requires reading visible
  text. Tests whether models can leverage text as a recognition cue.
- **Reflection** — the object appears as a reflection in a mirror or
  water surface, annotated regardless of whether the original object is
  also visible.
- **Dominant** — the object would be immediately noticed upon viewing the
  image. An image may have any number of dominant objects, or none at all.
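
These attributes map onto the per-box flags in `bboxes`, so evaluation subsets can be selected with a one-line filter. A hypothetical helper (the name is ours):

```python
def images_with_flag(records, flag):
    """Select records containing at least one bounding box where the
    given attribute flag (e.g. 'rendition_flag') is set."""
    return [r for r in records if any(b.get(flag) for b in r["bboxes"])]
```

For example, `images_with_flag(records, "ocr_needed_flag")` yields the subset probing text-based recognition.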

---

## Annotation Process

Annotations were produced by a team of 7 non-domain-expert annotators
(aged 16–50, with diverse backgrounds including geology specialists,
canine enthusiasts, and automotive buffs) recruited and trained in-house.

The annotation process was iterative:

1. **Training**: Annotators studied known ImageNet issues, worked through
   attribute examples, and practised on intentionally challenging classes.
   Only annotators passing a quality threshold proceeded to real tasks.
2. **Per-class preparation**: Before labelling any image, annotators
   examined the actual image content of each class, consulted external
   references (Wikipedia, iNaturalist), and recorded a working definition
   in a shared table.
3. **Image labelling**: Annotators drew bounding boxes around all objects
   a person would notice at first glance and assigned class labels and
   attributes, assisted by OWLv2 bounding-box proposals and OpenCLIP
   top-20 class predictions.
4. **Verification round**: Annotators revisited already-annotated images
   using finalised guidelines, with OWLv2/OpenCLIP replaced by MLLM
   predictions with SAM3-generated localisations as a stronger reference.
   This round is ongoing.

Quality was continuously monitored via control sets and a supervised
communication channel.

---

## Considerations for Using the Data

### Limitations

- All annotators share a European background (Czechia, Ukraine, Greece,
  Latvia), which may affect interpretation of culturally specific classes.
- Fine-grained wildlife classes were annotated by non-experts using
  online references; species-level distinctions may be imprecise.
- The shared class-definition table enforces consistency but may propagate
  errors if a class is defined incorrectly.
- Model predictions (anonymised, optional) may have nudged annotators
  toward certain labels.
- This reannotation covers only the validation set. Due to the
  distribution shift between training and validation sets, revised class
  names and definitions may not accurately reflect the training set.
- The verification round is ongoing; some annotations may still be
  updated.

### Social Impact

This dataset extends ImageNet-1K validation labels to support multi-label
evaluation and spatial grounding, enabling more accurate measurement of
model performance. The attribute annotations allow fine-grained analysis
of model capabilities across distinct recognition regimes (text-based,
rendition-based, etc.).

---

## Related Work

- [Flaws of ImageNet](https://arxiv.org/abs/2412.00076) — our prior
  analysis of ImageNet issues ([ICLR blogpost](https://github.com/klarajanouskova/ImageNet/))
- [Multimodal Large Language Models as Image Classifiers](https://arxiv.org/abs/2603.06578) —
  partial reannotation and MLLM evaluation study
- [Aiming for Perfect ImageNet-1K](https://klarajanouskova.github.io/ImageNet/) — project page

---

## Dataset Card Contact

[c1rcuslegend](mailto:akelloillya@gmail.com)