---
license: cc-by-4.0
tags:
- calibration
- post-hoc calibration
- uncertainty quantification
- benchmark
- tabular
- computer vision
size_categories:
- 1M<n<10M
---
# CalArena — Calibration Benchmark Dataset
CalArena is a large-scale benchmark for evaluating post-hoc calibration methods on classification models.
It covers **7 benchmarks** across tabular and computer vision domains, spanning hundreds of (dataset, model) pairs and three problem types (binary, multiclass, and large-scale multiclass).
Each entry in the benchmark is a `(p_cal, y_cal, p_test, y_test)` tuple — the calibration split and test split of predicted probabilities and ground-truth labels for one (dataset, model) pair.
Calibration methods are fitted on the calibration split and evaluated on the test split.
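For example, fitting and evaluating a simple post-hoc calibrator follows a plain fit/evaluate pattern. The snippet below is a minimal sketch for the binary case, using Platt scaling via scikit-learn's `LogisticRegression` as a stand-in calibrator and random arrays in place of a real benchmark entry:
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

# Stand-ins for one (p_cal, y_cal, p_test, y_test) entry (binary case).
rng = np.random.default_rng(0)
p_cal, y_cal = rng.uniform(size=500), rng.integers(0, 2, size=500)
p_test, y_test = rng.uniform(size=500), rng.integers(0, 2, size=500)

# Platt scaling: logistic regression on the log-odds of the uncalibrated
# probabilities, fitted on the calibration split only.
eps = 1e-12
logit = lambda p: np.log(np.clip(p, eps, 1 - eps) / np.clip(1 - p, eps, 1 - eps))
calibrator = LogisticRegression().fit(logit(p_cal)[:, None], y_cal)

# Evaluate on the held-out test split.
p_test_cal = calibrator.predict_proba(logit(p_test)[:, None])[:, 1]
print("test log loss before:", log_loss(y_test, p_test))
print("test log loss after: ", log_loss(y_test, p_test_cal))
```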
This dataset is the data companion to the [CalArena code repository](https://github.com/super-anonymous-researcher/CalArena).
---
## Files
| File | Description | Size |
|---|---|---|
| `Licenses.zip` | License files for each data source used to create the benchmarks | < 1 MB |
| `tabrepo-binary.h5` | Binary classification, classical tabular models | ~36 MB |
| `tabrepo-binary-experiments.csv` | Experiment index for `tabrepo-binary` | < 1 MB |
| `tabarena-binary.h5` | Binary classification, modern tabular foundation models | ~26 MB |
| `tabarena-binary-experiments.csv` | Experiment index for `tabarena-binary` | < 1 MB |
| `cv-binary.h5` | Binary classification, computer vision models | < 1 MB |
| `cv-binary-experiments.csv` | Experiment index for `cv-binary` | < 1 MB |
| `tabrepo-multiclass.h5` | Multiclass classification, classical tabular models | ~115 MB |
| `tabrepo-multiclass-experiments.csv` | Experiment index for `tabrepo-multiclass` | < 1 MB |
| `tabarena-multiclass.h5` | Multiclass classification, modern tabular foundation models | ~11 MB |
| `tabarena-multiclass-experiments.csv` | Experiment index for `tabarena-multiclass` | < 1 MB |
| `cv-multiclass.h5` | Multiclass classification, computer vision models | ~39 MB |
| `cv-multiclass-experiments.csv` | Experiment index for `cv-multiclass` | < 1 MB |
| `imagenet-multiclass.h5` | 1000-class ImageNet, computer vision models | ~1.5 GB |
| `imagenet-multiclass-experiments.csv` | Experiment index for `imagenet-multiclass` | < 1 MB |
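Individual files can be fetched with `huggingface_hub`. A minimal sketch (the repo id below is a placeholder; substitute this dataset's actual Hub id):
```python
from huggingface_hub import hf_hub_download

# Placeholder: replace with this dataset's Hub id.
REPO_ID = "<org>/<calarena-dataset>"

path = hf_hub_download(
    repo_id=REPO_ID,
    filename="tabrepo-binary.h5",
    repo_type="dataset",
)
print(path)  # local cache path of the downloaded file
```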
---
## Benchmark overview
| Benchmark | Problem type | # Base models | Datasets | # Experiments |
|---|---|---|---|---|
| `tabrepo-binary` | Binary | 8 | 104 tabular datasets | 832 |
| `tabarena-binary` | Binary | 11 | 30 tabular datasets | 314 |
| `cv-binary` | Binary | 9 | 3 (CIFAR-10†, Breast, Pneumonia) | 13 |
| `tabrepo-multiclass` | Multiclass | 8 | 65 tabular datasets | 520 |
| `tabarena-multiclass` | Multiclass | 11 | 8 tabular datasets | 84 |
| `cv-multiclass` | Multiclass | 10 | 6 (CIFAR-10, CIFAR-100, Birds, SVHN, Derma, OCT) | 20 |
| `imagenet-multiclass` | Large scale multiclass | 8 | 1 (ImageNet) | 8 |
† CIFAR-10 is converted to binary (Animal vs Machine) by marginalising over class groups.
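Marginalising simply sums the per-class probabilities within each group. A minimal sketch of the conversion (the class-to-group assignment below follows the standard CIFAR-10 label order and is our illustration, not necessarily the exact grouping used by the CalArena scripts):
```python
import numpy as np

# Standard CIFAR-10 label order (assumed): 0 airplane, 1 automobile, 2 bird,
# 3 cat, 4 deer, 5 dog, 6 frog, 7 horse, 8 ship, 9 truck.
ANIMAL = [2, 3, 4, 5, 6, 7]

def animal_probability(probas_10: np.ndarray) -> np.ndarray:
    """Collapse (n, 10) class probabilities to P(Animal); P(Machine) = 1 - P(Animal)."""
    return probas_10[:, ANIMAL].sum(axis=1)
```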
### Base models
**TabRepo** (classical tabular): CatBoost, ExtraTrees, LightGBM, LinearModel, NeuralNetFastAI, NeuralNetTorch, RandomForest, XGBoost. Source: [TabRepo](https://github.com/autogluon/tabrepo) repository `D244_F3_C1530_200`.
Best hyperparameter configuration selected per (dataset, model, fold) by validation error.
**TabArena** (modern tabular): TabPFN-v2.6, TabICLv2, RealTabPFN-v2.5, TabICL\_GPU, LimiX\_GPU, TabM\_GPU, RealMLP\_GPU, BetaTabPFN\_GPU, ModernNCA\_GPU, Mitra\_GPU, TabDPT\_GPU.
Models selected with ≥ 1300 Elo on the [TabArena leaderboard](https://huggingface.co/spaces/TabArena/leaderboard) (Classification, All Datasets, as of April 1, 2026).
Source: [TabArena](https://github.com/autogluon/tabarena).
**Computer vision**: ResNet, DenseNet, WideResNet, ViT, BEiT, ConvNeXt, Swin, EVA, and others depending on the dataset.
Logits sourced from two collections: [NN_calibration](https://github.com/markus93/NN_calibration/tree/master/logits) and [Beyond Overconfidence](https://zenodo.org/records/15229730).
---
## Data format
### HDF5 files
Each `.h5` file has the following structure:
```
{dataset}/
  {model}/
    probas_cal    float32  (n_cal,)             # positive-class probabilities [binary]
                  float32  (n_cal, n_classes)   # class probabilities [multiclass]
    labels_cal    int32    (n_cal,)
    probas_test   float32  (n_test,)            # same shape conventions as above
    labels_test   int32    (n_test,)
```
File-level attributes:
- `source` — `"tabrepo"`, `"tabarena"`, `"cv"`, or `"imagenet"`
- `problem_type` — `"binary"` or `"multiclass"`
All probabilities are valid (non-negative, sum to 1 for multiclass). Labels are 0-indexed integers.
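A quick sanity check along these lines (the file name and tolerance are illustrative):
```python
import h5py
import numpy as np

with h5py.File("tabrepo-multiclass.h5", "r") as f:
    print(f.attrs["source"], f.attrs["problem_type"])
    for ds in f:
        for mdl in f[ds]:
            p = f[ds][mdl]["probas_test"][:]
            y = f[ds][mdl]["labels_test"][:]
            assert (p >= 0).all() and (p <= 1).all()
            if p.ndim == 2:  # multiclass: rows sum to 1
                assert np.allclose(p.sum(axis=1), 1.0, atol=1e-4)
                assert 0 <= y.min() and y.max() < p.shape[1]  # 0-indexed labels
```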
### Experiment CSV files
Each `{benchmark}-experiments.csv` lists one row per (dataset, model) pair:
| Column | Description |
|---|---|
| `dataset` | Dataset name (matches the HDF5 group key) |
| `model` | Model name (matches the HDF5 group key) |
| `cal_size` | Number of calibration samples |
| `test_size` | Number of test samples |
| `n_classes` | Number of classes (multiclass benchmarks only) |
| `tabrepo_fold` / `tabarena_fold` | Fold index used (TabRepo/TabArena benchmarks) |
| `tabrepo_config` / `tabarena_config` | Best hyperparameter configuration selected (TabRepo/TabArena) |
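The index is handy for selecting a subset of experiments before touching the HDF5 file, e.g. with pandas (the `cal_size` threshold below is arbitrary):
```python
import pandas as pd

idx = pd.read_csv("tabrepo-binary-experiments.csv")

# Example: keep only experiments with at least 1,000 calibration samples.
large = idx[idx["cal_size"] >= 1000]
keys = large["dataset"] + "/" + large["model"]  # HDF5 group keys
print(keys.head().tolist())
```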
---
## Loading the data
### Python (h5py)
```python
import h5py
import numpy as np
with h5py.File("tabrepo-binary.h5", "r") as f:
    # List all (dataset, model) pairs
    pairs = [(ds, mdl) for ds in f for mdl in f[ds]]

    # Load a single experiment
    grp = f["anneal/CatBoost"]
    p_cal = grp["probas_cal"][:]    # shape (n_cal,)
    y_cal = grp["labels_cal"][:]    # shape (n_cal,)
    p_test = grp["probas_test"][:]  # shape (n_test,)
    y_test = grp["labels_test"][:]  # shape (n_test,)
```
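Continuing from the arrays loaded above, evaluation metrics can be computed directly. A minimal equal-width-bin ECE sketch for the binary case (15 bins is a common convention, not something CalArena prescribes):
```python
import numpy as np

def ece_binary(p, y, n_bins=15):
    """Equal-width-bin expected calibration error for positive-class probabilities."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (p >= lo) & ((p < hi) if hi < 1.0 else (p <= hi))
        if mask.any():
            ece += mask.mean() * abs(p[mask].mean() - y[mask].mean())
    return ece

print("test ECE:", ece_binary(p_test, y_test))
```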
### With the CalArena runner
The [CalArena repository](https://github.com/super-anonymous-researcher/CalArena) provides `run_benchmark.py`, which loads these files automatically and runs all calibrators:
```bash
# Place .h5 and .csv files under calibration_benchmarks/
python run_benchmark.py --benchmark tabrepo-binary
```
---
## Dataset construction
The scripts used to generate the benchmark files can be found in the [CalArena repository](https://github.com/super-anonymous-researcher/CalArena).
### Calibration / test split
For TabRepo and TabArena, the calibration split corresponds to the **validation fold** of the respective repository, and the test split is the **held-out test set**.
This ensures no data leakage: the base model never sees the calibration set during training.
For computer vision datasets, the calibration and test splits are fixed partitions provided by the original data sources.
### Excluded datasets
The following datasets were excluded due to errors in the upstream repositories:
- **TabRepo binary**: MiniBooNE
- **TabRepo multiclass**: jannis, kropt, shuttle
---
## Intended use
This dataset is intended for:
- Benchmarking post-hoc calibration algorithms on diverse classification tasks
- Studying the relationship between model type, dataset characteristics, and calibration difficulty
- Developing new calibration methods with access to pre-computed probability estimates
---
## License
The benchmark data is released under **CC BY 4.0**.
The upstream sources of the model predictions retain their original licenses; please consult the respective sources before redistribution:
- [TabArena](https://github.com/autogluon/tabarena)
- [TabRepo](https://github.com/autogluon/tabarena/blob/main/tabrepo.md)
- [NN_calibration](https://github.com/markus93/NN_calibration)
- [Data for: "Beyond Overconfidence..."](https://zenodo.org/records/15229730)
We warmly thank the authors of the original papers for letting us republish their model predictions here.
---
## Citation
```bibtex
@inproceedings{calarena2025,
title = {CalArena: A Large-Scale Benchmark for Post-Hoc Calibration},
author = {...},
booktitle = {...},
year = {2025},
}
```