---
license: cc-by-4.0
tags:
  - calibration
  - post-hoc calibration
  - uncertainty quantification
  - benchmark
  - tabular
  - computer vision
size_categories:
  - 1M<n<10M
---

# CalArena — Calibration Benchmark Dataset

CalArena is a large-scale benchmark for evaluating post-hoc calibration methods on classification models. It covers 7 benchmarks across tabular and computer-vision domains, spanning hundreds of (dataset, model) pairs and three problem types: binary, multiclass, and large-scale multiclass.

Each entry in the benchmark is a (p_cal, y_cal, p_test, y_test) tuple — the calibration split and test split of predicted probabilities and ground-truth labels for one (dataset, model) pair. Calibration methods are fitted on the calibration split and evaluated on the test split.
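The fit-on-calibration / evaluate-on-test protocol can be sketched with a simple post-hoc calibrator. Below is a minimal histogram-binning calibrator in NumPy; the synthetic arrays stand in for one benchmark entry, and the function names are ours for illustration, not part of CalArena:

```python
import numpy as np

def fit_histogram_binning(p_cal, y_cal, n_bins=10):
    """Map each equal-width confidence bin to its empirical positive rate."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(p_cal, edges[1:-1]), 0, n_bins - 1)
    rates = np.full(n_bins, 0.5)          # fallback for empty bins
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            rates[b] = y_cal[mask].mean()
    return edges, rates

def apply_histogram_binning(p, edges, rates):
    """Replace each probability by the rate of its bin."""
    bins = np.clip(np.digitize(p, edges[1:-1]), 0, len(rates) - 1)
    return rates[bins]

# Synthetic stand-in for (p_cal, y_cal, p_test): true positive rate is p**2,
# so the raw probabilities are miscalibrated.
rng = np.random.default_rng(0)
p_cal = rng.uniform(size=2000)
y_cal = (rng.uniform(size=2000) < p_cal**2).astype(int)
p_test = rng.uniform(size=1000)

edges, rates = fit_histogram_binning(p_cal, y_cal)           # fit on calibration split
p_test_cal = apply_histogram_binning(p_test, edges, rates)   # evaluate on test split
```

Any calibrator with the same fit/transform shape (temperature scaling, isotonic regression, Platt scaling) slots into this loop identically.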

This dataset is the data companion to the CalArena code repository.


## Files

| File | Description | Size |
|------|-------------|------|
| `Licenses.zip` | License files for each data source used to create the benchmarks | < 1 MB |
| `tabrepo-binary.h5` | Binary classification, classical tabular models | ~36 MB |
| `tabrepo-binary-experiments.csv` | Experiment index for `tabrepo-binary` | < 1 MB |
| `tabarena-binary.h5` | Binary classification, modern tabular foundation models | ~26 MB |
| `tabarena-binary-experiments.csv` | Experiment index for `tabarena-binary` | < 1 MB |
| `cv-binary.h5` | Binary classification, computer vision models | < 1 MB |
| `cv-binary-experiments.csv` | Experiment index for `cv-binary` | < 1 MB |
| `tabrepo-multiclass.h5` | Multiclass classification, classical tabular models | ~115 MB |
| `tabrepo-multiclass-experiments.csv` | Experiment index for `tabrepo-multiclass` | < 1 MB |
| `tabarena-multiclass.h5` | Multiclass classification, modern tabular foundation models | ~11 MB |
| `tabarena-multiclass-experiments.csv` | Experiment index for `tabarena-multiclass` | < 1 MB |
| `cv-multiclass.h5` | Multiclass classification, computer vision models | ~39 MB |
| `cv-multiclass-experiments.csv` | Experiment index for `cv-multiclass` | < 1 MB |
| `imagenet-multiclass.h5` | 1000-class ImageNet, computer vision models | ~1.5 GB |
| `imagenet-multiclass-experiments.csv` | Experiment index for `imagenet-multiclass` | < 1 MB |

## Benchmark overview

| Benchmark | Problem type | Base models | # Datasets | # Experiments |
|-----------|--------------|------------:|------------|--------------:|
| `tabrepo-binary` | Binary | 8 | 104 tabular datasets | 832 |
| `tabarena-binary` | Binary | 11 | 30 tabular datasets | 314 |
| `cv-binary` | Binary | 9 | 3 (CIFAR-10†, Breast, Pneumonia) | 13 |
| `tabrepo-multiclass` | Multiclass | 8 | 65 tabular datasets | 520 |
| `tabarena-multiclass` | Multiclass | 11 | 8 tabular datasets | 84 |
| `cv-multiclass` | Multiclass | 10 | 6 (CIFAR-10, CIFAR-100, Birds, SVHN, Derma, OCT) | 20 |
| `imagenet-multiclass` | Large-scale multiclass | 8 | 1 (ImageNet) | 8 |

† CIFAR-10 is converted to binary (Animal vs Machine) by marginalising over class groups.
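The marginalisation is a per-row sum of class probabilities over each group. A sketch, assuming the standard CIFAR-10 class order and treating the six animal classes (bird, cat, deer, dog, frog, horse) as the positive group; the exact grouping used by the benchmark scripts may differ:

```python
import numpy as np

# Standard CIFAR-10 class order:
# 0 airplane, 1 automobile, 2 bird, 3 cat, 4 deer, 5 dog, 6 frog, 7 horse, 8 ship, 9 truck
ANIMAL_CLASSES = [2, 3, 4, 5, 6, 7]   # assumed grouping; "Machine" is the remaining four

def animal_probability(probas_10):
    """Collapse (n, 10) class probabilities to a positive-class (Animal) probability."""
    return probas_10[:, ANIMAL_CLASSES].sum(axis=1)

# On uniform rows (0.1 per class), the Animal probability is 6 * 0.1 = 0.6.
uniform = np.full((4, 10), 0.1)
p_animal = animal_probability(uniform)
```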

### Base models

**TabRepo (classical tabular):** CatBoost, ExtraTrees, LightGBM, LinearModel, NeuralNetFastAI, NeuralNetTorch, RandomForest, XGBoost. Source: TabRepo repository `D244_F3_C1530_200`. The best hyperparameter configuration is selected per (dataset, model, fold) by validation error.

**TabArena (modern tabular):** TabPFN-v2.6, TabICLv2, RealTabPFN-v2.5, TabICL_GPU, LimiX_GPU, TabM_GPU, RealMLP_GPU, BetaTabPFN_GPU, ModernNCA_GPU, Mitra_GPU, TabDPT_GPU. Models were selected by requiring ≥ 1300 Elo on the TabArena leaderboard (Classification, All Datasets, as of April 1, 2026). Source: TabArena.

**Computer vision:** ResNet, DenseNet, WideResNet, ViT, BEiT, ConvNeXt, Swin, EVA, and others, depending on the dataset. Logits are sourced from two collections: NN_calibration and Beyond Overconfidence.


## Data format

### HDF5 files

Each `.h5` file has the following structure:

```
{dataset}/
  {model}/
    probas_cal   float32  (n_cal,)            # positive-class probabilities [binary]
                 float32  (n_cal, n_classes)  # class probabilities [multiclass]
    labels_cal   int32    (n_cal,)
    probas_test  float32  (n_test,)           # same shape conventions as above
    labels_test  int32    (n_test,)
```

File-level attributes:

- `source`: `"tabrepo"`, `"tabarena"`, `"cv"`, or `"imagenet"`
- `problem_type`: `"binary"` or `"multiclass"`

All probabilities are valid (non-negative, sum to 1 for multiclass). Labels are 0-indexed integers.
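These invariants are easy to verify per experiment. A small sanity check (our helper, not part of the dataset tooling), shown here on synthetic arrays:

```python
import numpy as np

def check_experiment(probas, labels):
    """Verify the stated invariants for one (probas, labels) pair."""
    assert np.all(probas >= 0), "probabilities must be non-negative"
    if probas.ndim == 2:                                   # multiclass: rows sum to 1
        assert np.allclose(probas.sum(axis=1), 1.0, atol=1e-5), "rows must sum to 1"
        n_classes = probas.shape[1]
    else:                                                  # binary: positive-class prob
        assert np.all(probas <= 1), "probability above 1"
        n_classes = 2
    assert labels.min() >= 0 and labels.max() < n_classes, "labels must be 0-indexed"
    return True

# Synthetic multiclass example in the dtypes the HDF5 files use
probas = np.array([[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]], dtype=np.float32)
labels = np.array([0, 2], dtype=np.int32)
ok = check_experiment(probas, labels)
```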

### Experiment CSV files

Each `{benchmark}-experiments.csv` lists one row per (dataset, model) pair:

| Column | Description |
|--------|-------------|
| `dataset` | Dataset name (matches the HDF5 group key) |
| `model` | Model name (matches the HDF5 group key) |
| `cal_size` | Number of calibration samples |
| `test_size` | Number of test samples |
| `n_classes` | Number of classes (multiclass benchmarks only) |
| `tabrepo_fold` / `tabarena_fold` | Fold index used (TabRepo/TabArena benchmarks) |
| `tabrepo_config` / `tabarena_config` | Best hyperparameter configuration selected (TabRepo/TabArena) |
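The index joins directly to the HDF5 group keys, since `"{dataset}/{model}"` is the group path. A sketch with pandas on a hypothetical two-row excerpt (the fold and config values below are made up for illustration):

```python
import io
import pandas as pd

# Hypothetical excerpt of tabrepo-binary-experiments.csv; values are illustrative only.
csv_text = (
    "dataset,model,cal_size,test_size,tabrepo_fold,tabrepo_config\n"
    "anneal,CatBoost,179,90,0,CatBoost_c1\n"
    "anneal,XGBoost,179,90,0,XGBoost_c1\n"
)
experiments = pd.read_csv(io.StringIO(csv_text))

# HDF5 group path for each experiment: "{dataset}/{model}"
experiments["h5_key"] = experiments["dataset"] + "/" + experiments["model"]
```

Iterating over `experiments["h5_key"]` then gives the groups to open in the matching `.h5` file.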

## Loading the data

### Python (h5py)

```python
import h5py

with h5py.File("tabrepo-binary.h5", "r") as f:
    # List all (dataset, model) pairs
    pairs = [(ds, mdl) for ds in f for mdl in f[ds]]

    # Load a single experiment
    grp = f["anneal/CatBoost"]
    p_cal   = grp["probas_cal"][:]   # shape (n_cal,)
    y_cal   = grp["labels_cal"][:]   # shape (n_cal,)
    p_test  = grp["probas_test"][:]  # shape (n_test,)
    y_test  = grp["labels_test"][:]  # shape (n_test,)
```
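Once the arrays are loaded, a natural first diagnostic before fitting any calibrator is the expected calibration error (ECE). A minimal equal-width-bin ECE for binary positive-class probabilities (our helper, demonstrated on synthetic arrays rather than the loaded `p_test`, `y_test`):

```python
import numpy as np

def ece_binary(p, y, n_bins=15):
    """Equal-width-bin ECE on positive-class probabilities p and 0/1 labels y."""
    conf = np.maximum(p, 1 - p)                      # confidence of the predicted class
    correct = ((p >= 0.5).astype(int) == y).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(conf, edges[1:-1]), 0, n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():                               # |accuracy - confidence| per bin,
            err += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return err                                       # weighted by bin mass

# Fully confident, always-correct predictions give ECE = 0
p = np.array([1.0, 1.0, 0.0, 0.0])
y = np.array([1, 1, 0, 0])
score = ece_binary(p, y)
```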

### With the CalArena runner

The CalArena repository provides `run_benchmark.py`, which loads these files automatically and runs all calibrators:

```bash
# Place .h5 and .csv files under calibration_benchmarks/
python run_benchmark.py --benchmark tabrepo-binary
```

## Dataset construction

The scripts used to generate the benchmark files can be found in the CalArena repository.

### Calibration / test split

For TabRepo and TabArena, the calibration split corresponds to the validation fold of the respective repository, and the test split is the held-out test set. This ensures no data leakage: the base model never sees the calibration set during training.

For computer vision datasets, the calibration and test splits are fixed partitions provided by the original data sources.

### Excluded datasets

The following datasets were excluded due to errors in the upstream repositories:

- TabRepo binary: `MiniBooNE`
- TabRepo multiclass: `jannis`, `kropt`, `shuttle`

## Intended use

This dataset is intended for:

- Benchmarking post-hoc calibration algorithms on diverse classification tasks
- Studying the relationship between model type, dataset characteristics, and calibration difficulty
- Developing new calibration methods with access to pre-computed probability estimates

## License

The benchmark data is released under CC BY 4.0. Downstream sources of model predictions retain their original licenses (bundled in `Licenses.zip`); please consult the respective sources before redistribution.

We warmly thank the authors of the original papers for letting us republish their model predictions here.


## Citation

```bibtex
@inproceedings{calarena2025,
  title     = {CalArena: A Large-Scale Benchmark for Post-Hoc Calibration},
  author    = {...},
  booktitle = {...},
  year      = {2025},
}
```