# CalArena — Calibration Benchmark Dataset
CalArena is a large-scale benchmark for evaluating post-hoc calibration methods on classification models. It covers 7 benchmarks across the tabular and computer vision domains, spanning hundreds of (dataset, model) pairs and three problem types (binary, multiclass, and large-scale multiclass).
Each entry in the benchmark is a `(p_cal, y_cal, p_test, y_test)` tuple: the calibration and test splits of predicted probabilities and ground-truth labels for one (dataset, model) pair.
Calibration methods are fitted on the calibration split and evaluated on the test split.
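To illustrate this workflow, here is a minimal sketch of fitting one classical calibrator (Platt scaling, via scikit-learn's `LogisticRegression`) on a calibration split and applying it to a test split. The arrays below are random stand-ins; in practice they come from the HDF5 files described under "Data format".

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for one (p_cal, y_cal, p_test, y_test) tuple
p_cal = rng.uniform(size=500)                        # positive-class probabilities
y_cal = (rng.uniform(size=500) < p_cal).astype(int)  # labels consistent with p_cal
p_test = rng.uniform(size=500)
y_test = (rng.uniform(size=500) < p_test).astype(int)

def logit(p: np.ndarray) -> np.ndarray:
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

# Platt scaling: fit a 1-D logistic regression on the calibration split,
# then recalibrate the test-split probabilities
platt = LogisticRegression().fit(logit(p_cal).reshape(-1, 1), y_cal)
p_test_recal = platt.predict_proba(logit(p_test).reshape(-1, 1))[:, 1]
```

Any other post-hoc calibrator (temperature scaling, isotonic regression, histogram binning, ...) slots into the same fit-on-`cal`, evaluate-on-`test` pattern.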
This dataset is the data companion to the CalArena code repository.
## Files

| File | Description | Size |
|---|---|---|
| `Licenses.zip` | License files for each data source used to create the benchmarks | < 1 MB |
| `tabrepo-binary.h5` | Binary classification, classical tabular models | ~36 MB |
| `tabrepo-binary-experiments.csv` | Experiment index for `tabrepo-binary` | < 1 MB |
| `tabarena-binary.h5` | Binary classification, modern tabular foundation models | ~26 MB |
| `tabarena-binary-experiments.csv` | Experiment index for `tabarena-binary` | < 1 MB |
| `cv-binary.h5` | Binary classification, computer vision models | < 1 MB |
| `cv-binary-experiments.csv` | Experiment index for `cv-binary` | < 1 MB |
| `tabrepo-multiclass.h5` | Multiclass classification, classical tabular models | ~115 MB |
| `tabrepo-multiclass-experiments.csv` | Experiment index for `tabrepo-multiclass` | < 1 MB |
| `tabarena-multiclass.h5` | Multiclass classification, modern tabular foundation models | ~11 MB |
| `tabarena-multiclass-experiments.csv` | Experiment index for `tabarena-multiclass` | < 1 MB |
| `cv-multiclass.h5` | Multiclass classification, computer vision models | ~39 MB |
| `cv-multiclass-experiments.csv` | Experiment index for `cv-multiclass` | < 1 MB |
| `imagenet-multiclass.h5` | 1000-class ImageNet, computer vision models | ~1.5 GB |
| `imagenet-multiclass-experiments.csv` | Experiment index for `imagenet-multiclass` | < 1 MB |
## Benchmark overview

| Benchmark | Problem type | Base models | # Datasets | # Experiments |
|---|---|---|---|---|
| `tabrepo-binary` | Binary | 8 | 104 tabular datasets | 832 |
| `tabarena-binary` | Binary | 11 | 30 tabular datasets | 314 |
| `cv-binary` | Binary | 9 | 3 (CIFAR-10†, Breast, Pneumonia) | 13 |
| `tabrepo-multiclass` | Multiclass | 8 | 65 tabular datasets | 520 |
| `tabarena-multiclass` | Multiclass | 11 | 8 tabular datasets | 84 |
| `cv-multiclass` | Multiclass | 10 | 6 (CIFAR-10, CIFAR-100, Birds, SVHN, Derma, OCT) | 20 |
| `imagenet-multiclass` | Large-scale multiclass | 8 | 1 (ImageNet) | 8 |
† CIFAR-10 is converted to binary (Animal vs Machine) by marginalising over class groups.
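The marginalisation amounts to summing class probabilities within each super-class. A minimal sketch, assuming the standard CIFAR-10 class order; the exact grouping used by the benchmark may differ:

```python
import numpy as np

# Hypothetical grouping of the ten CIFAR-10 classes into two super-classes
ANIMAL = [2, 3, 4, 5, 6, 7]    # bird, cat, deer, dog, frog, horse
MACHINE = [0, 1, 8, 9]         # airplane, automobile, ship, truck

def marginalise_to_binary(probas: np.ndarray) -> np.ndarray:
    """Collapse (n, 10) class probabilities into P(Animal) per sample."""
    return probas[:, ANIMAL].sum(axis=1)

uniform = np.full((4, 10), 0.1)            # four uniform 10-class predictions
p_animal = marginalise_to_binary(uniform)  # six animal classes, 0.1 each
```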
## Base models

- **TabRepo (classical tabular):** CatBoost, ExtraTrees, LightGBM, LinearModel, NeuralNetFastAI, NeuralNetTorch, RandomForest, XGBoost. Source: TabRepo repository `D244_F3_C1530_200`. The best hyperparameter configuration is selected per (dataset, model, fold) by validation error.
- **TabArena (modern tabular):** TabPFN-v2.6, TabICLv2, RealTabPFN-v2.5, TabICL_GPU, LimiX_GPU, TabM_GPU, RealMLP_GPU, BetaTabPFN_GPU, ModernNCA_GPU, Mitra_GPU, TabDPT_GPU. Models selected with ≥ 1300 ELO on the TabArena leaderboard (Classification, All Datasets, as of April 1, 2026). Source: TabArena.
- **Computer vision:** ResNet, DenseNet, WideResNet, ViT, BEiT, ConvNeXt, Swin, EVA, and others, depending on the dataset. Logits sourced from two collections: NN_calibration and Beyond Overconfidence.
## Data format

### HDF5 files

Each `.h5` file has the following structure:

```
{dataset}/
  {model}/
    probas_cal    float32 (n_cal,)             # positive-class probabilities [binary]
                  float32 (n_cal, n_classes)   # class probabilities [multiclass]
    labels_cal    int32   (n_cal,)
    probas_test   float32 (n_test,)            # same shape conventions as above
    labels_test   int32   (n_test,)
```

File-level attributes:

- `source`: `"tabrepo"`, `"tabarena"`, `"cv"`, or `"imagenet"`
- `problem_type`: `"binary"` or `"multiclass"`
All probabilities are valid (non-negative, sum to 1 for multiclass). Labels are 0-indexed integers.
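These invariants can be checked programmatically. A minimal sketch (the helper `check_probas` is hypothetical, not part of the benchmark code):

```python
import numpy as np

def check_probas(p: np.ndarray, y: np.ndarray) -> bool:
    """Check the stated invariants for one split's probabilities and labels."""
    if not np.all(p >= 0):
        return False
    if p.ndim == 2:  # multiclass: rows sum to 1, labels in [0, n_classes)
        return (np.allclose(p.sum(axis=1), 1.0, atol=1e-4)
                and y.min() >= 0 and y.max() < p.shape[1])
    # binary: positive-class probability in [0, 1], labels in {0, 1}
    return bool(np.all(p <= 1)) and set(np.unique(y)) <= {0, 1}
```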
### Experiment CSV files

Each `{benchmark}-experiments.csv` lists one row per (dataset, model) pair:

| Column | Description |
|---|---|
| `dataset` | Dataset name (matches the HDF5 group key) |
| `model` | Model name (matches the HDF5 group key) |
| `cal_size` | Number of calibration samples |
| `test_size` | Number of test samples |
| `n_classes` | Number of classes (multiclass benchmarks only) |
| `tabrepo_fold` / `tabarena_fold` | Fold index used (TabRepo/TabArena benchmarks) |
| `tabrepo_config` / `tabarena_config` | Best hyperparameter configuration selected (TabRepo/TabArena) |
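The experiment index loads directly with pandas, and each row maps to one HDF5 group key. A minimal sketch using an inline two-row stand-in for the CSV (the values shown are illustrative, not real entries):

```python
import io
import pandas as pd

# Illustrative stand-in for tabrepo-binary-experiments.csv
csv_text = """dataset,model,cal_size,test_size,tabrepo_fold,tabrepo_config
anneal,CatBoost,200,598,0,CatBoost_c1
anneal,XGBoost,200,598,0,XGBoost_c4
"""
experiments = pd.read_csv(io.StringIO(csv_text))

# Each row corresponds to the HDF5 group "{dataset}/{model}"
keys = (experiments["dataset"] + "/" + experiments["model"]).tolist()
```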
## Loading the data

### Python (h5py)

```python
import h5py

with h5py.File("tabrepo-binary.h5", "r") as f:
    # List all (dataset, model) pairs
    pairs = [(ds, mdl) for ds in f for mdl in f[ds]]

    # Load a single experiment
    grp = f["anneal/CatBoost"]
    p_cal = grp["probas_cal"][:]    # shape (n_cal,)
    y_cal = grp["labels_cal"][:]    # shape (n_cal,)
    p_test = grp["probas_test"][:]  # shape (n_test,)
    y_test = grp["labels_test"][:]  # shape (n_test,)
```
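Once an experiment's arrays are loaded, calibration metrics can be computed directly. As an example, here is a minimal sketch of one common binary-ECE variant, which bins the positive-class probability into equal-width confidence bins (this is an illustration, not the benchmark's own metric implementation):

```python
import numpy as np

def ece_binary(p: np.ndarray, y: np.ndarray, n_bins: int = 15) -> float:
    """Expected calibration error over equal-width probability bins."""
    bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            # gap between mean predicted probability and empirical frequency
            gap = abs(p[mask].mean() - y[mask].mean())
            ece += mask.mean() * gap
    return float(ece)
```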
### With the CalArena runner

The CalArena repository provides `run_benchmark.py`, which loads these files automatically and runs all calibrators:

```shell
# Place .h5 and .csv files under calibration_benchmarks/
python run_benchmark.py --benchmark tabrepo-binary
```
## Dataset construction

The scripts used to generate the benchmark files can be found in the CalArena repository.
### Calibration / test split
For TabRepo and TabArena, the calibration split corresponds to the validation fold of the respective repository, and the test split is the held-out test set. This ensures no data leakage: the base model never sees the calibration set during training.
For computer vision datasets, the calibration and test splits are fixed partitions provided by the original data sources.
### Excluded datasets
The following datasets were excluded due to errors in the upstream repositories:
- TabRepo binary: MiniBooNE
- TabRepo multiclass: jannis, kropt, shuttle
## Intended use
This dataset is intended for:
- Benchmarking post-hoc calibration algorithms on diverse classification tasks
- Studying the relationship between model type, dataset characteristics, and calibration difficulty
- Developing new calibration methods with access to pre-computed probability estimates
## License

The benchmark data is released under CC BY 4.0. Downstream sources of model predictions retain their original licenses; please consult the respective sources (collected in `Licenses.zip`) before redistribution.
We warmly thank the authors of the original papers for letting us republish their model predictions here.
## Citation

```bibtex
@inproceedings{calarena2025,
  title     = {CalArena: A Large-Scale Benchmark for Post-Hoc Calibration},
  author    = {...},
  booktitle = {...},
  year      = {2025},
}
```