---
license: cc-by-4.0
tags:
- calibration
- post-hoc calibration
- uncertainty quantification
- benchmark
- tabular
- computer vision
size_categories:
- 1M<n<10M
---

# CalArena — Calibration Benchmark Dataset

CalArena is a large-scale benchmark for evaluating post-hoc calibration methods on classification models.
It covers **7 benchmarks** across tabular and computer vision domains, spanning hundreds of (dataset, model) pairs and three problem types (binary, multiclass, and large-scale multiclass).

Each entry in the benchmark is a `(p_cal, y_cal, p_test, y_test)` tuple — the calibration split and test split of predicted probabilities and ground-truth labels for one (dataset, model) pair.
Calibration methods are fitted on the calibration split and evaluated on the test split.
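
The quantity of interest after this fit/evaluate step is how well-calibrated the resulting probabilities are. As a concrete illustration, an expected calibration error (ECE) for the binary case can be computed with equal-width confidence bins; this is a generic sketch, not necessarily the exact metric implementation used by CalArena:

```python
import numpy as np

def ece_binary(p, y, n_bins=15):
    """Expected calibration error for positive-class probabilities `p`
    and binary labels `y`, using equal-width confidence bins."""
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each prediction to a bin via the interior bin edges.
    idx = np.clip(np.digitize(p, bins[1:-1]), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            # Gap between mean confidence and empirical accuracy,
            # weighted by the fraction of samples in this bin.
            ece += mask.mean() * abs(p[mask].mean() - y[mask].mean())
    return ece

# Two predictions at confidence 0.8, one correct and one wrong:
# mean confidence 0.8 vs. empirical rate 0.5 gives an ECE of ~0.3.
print(ece_binary(np.array([0.8, 0.8]), np.array([1, 0])))
```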

This dataset is the data companion to the [CalArena code repository](https://github.com/super-anonymous-researcher/CalArena).

---

## Files

| File | Description | Size |
|---|---|---|
| `tabrepo-binary.h5` | Binary classification, classical tabular models | ~36 MB |
| `tabrepo-binary-experiments.csv` | Experiment index for `tabrepo-binary` | < 1 MB |
| `tabarena-binary.h5` | Binary classification, modern tabular foundation models | ~26 MB |
| `tabarena-binary-experiments.csv` | Experiment index for `tabarena-binary` | < 1 MB |
| `cv-binary.h5` | Binary classification, computer vision models | < 1 MB |
| `cv-binary-experiments.csv` | Experiment index for `cv-binary` | < 1 MB |
| `tabrepo-multiclass.h5` | Multiclass classification, classical tabular models | ~115 MB |
| `tabrepo-multiclass-experiments.csv` | Experiment index for `tabrepo-multiclass` | < 1 MB |
| `tabarena-multiclass.h5` | Multiclass classification, modern tabular foundation models | ~11 MB |
| `tabarena-multiclass-experiments.csv` | Experiment index for `tabarena-multiclass` | < 1 MB |
| `cv-multiclass.h5` | Multiclass classification, computer vision models | ~39 MB |
| `cv-multiclass-experiments.csv` | Experiment index for `cv-multiclass` | < 1 MB |
| `imagenet-multiclass.h5` | 1000-class ImageNet, computer vision models | ~1.5 GB |
| `imagenet-multiclass-experiments.csv` | Experiment index for `imagenet-multiclass` | < 1 MB |

---

## Benchmark overview

| Benchmark | Problem type | Base models | # Datasets | # Experiments |
|---|---|---|---|---|
| `tabrepo-binary` | Binary | 8 | 104 tabular datasets | 832 |
| `tabarena-binary` | Binary | 11 | 30 tabular datasets | 314 |
| `cv-binary` | Binary | 9 | 3 (CIFAR-10†, Breast, Pneumonia) | 13 |
| `tabrepo-multiclass` | Multiclass | 8 | 65 tabular datasets | 520 |
| `tabarena-multiclass` | Multiclass | 11 | 8 tabular datasets | 84 |
| `cv-multiclass` | Multiclass | 10 | 6 (CIFAR-10, CIFAR-100, Birds, SVHN, Derma, OCT) | 20 |
| `imagenet-multiclass` | Large-scale multiclass | 8 | 1 (ImageNet) | 8 |

† CIFAR-10 is converted to binary (Animal vs Machine) by marginalising over class groups.
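
The Animal-vs-Machine reduction sums the 10-class probabilities over each class group. A sketch of that marginalisation, assuming the standard CIFAR-10 class order (the exact grouping used is defined in the CalArena repository):

```python
import numpy as np

# Standard CIFAR-10 order: airplane, automobile, bird, cat, deer,
# dog, frog, horse, ship, truck. The animal classes are indices 2..7.
ANIMAL_CLASSES = [2, 3, 4, 5, 6, 7]

def to_animal_vs_machine(probas_10):
    """Collapse (n, 10) class probabilities to positive-class ("Animal")
    probabilities by summing over the animal class group."""
    probas_10 = np.asarray(probas_10)
    return probas_10[:, ANIMAL_CLASSES].sum(axis=1)

# A uniform prediction over 10 classes marginalises to P(Animal) = 0.6.
uniform = np.full((1, 10), 0.1)
p_animal = to_animal_vs_machine(uniform)
```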

### Base models

**TabRepo** (classical tabular): CatBoost, ExtraTrees, LightGBM, LinearModel, NeuralNetFastAI, NeuralNetTorch, RandomForest, XGBoost. Source: [TabRepo](https://github.com/autogluon/tabrepo) repository `D244_F3_C1530_200`.
The best hyperparameter configuration is selected per (dataset, model, fold) by validation error.

**TabArena** (modern tabular): TabPFN-v2.6, TabICLv2, RealTabPFN-v2.5, TabICL\_GPU, LimiX\_GPU, TabM\_GPU, RealMLP\_GPU, BetaTabPFN\_GPU, ModernNCA\_GPU, Mitra\_GPU, TabDPT\_GPU.
Models were selected with ≥ 1300 ELO on the [TabArena leaderboard](https://huggingface.co/spaces/TabArena/leaderboard) (Classification, All Datasets, as of April 1, 2026).
Source: [TabArena](https://github.com/autogluon/tabarena).

**Computer vision**: ResNet, DenseNet, WideResNet, ViT, BEiT, ConvNeXt, Swin, EVA, and others depending on the dataset.
Logits are sourced from two collections: [NN_calibration](https://github.com/markus93/NN_calibration/tree/master/logits) and [Beyond Overconfidence](https://zenodo.org/records/15229730).

---

## Data format

### HDF5 files

Each `.h5` file has the following structure:

```
{dataset}/
  {model}/
    probas_cal   float32 (n_cal,)            # positive-class probabilities [binary]
                 float32 (n_cal, n_classes)  # class probabilities [multiclass]
    labels_cal   int32   (n_cal,)
    probas_test  float32 (n_test,)           # same shape conventions as above
    labels_test  int32   (n_test,)
```

File-level attributes:
- `source` — `"tabrepo"`, `"tabarena"`, `"cv"`, or `"imagenet"`
- `problem_type` — `"binary"` or `"multiclass"`

All probabilities are valid (non-negative, summing to 1 for multiclass). Labels are 0-indexed integers.
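
These guarantees are cheap to verify when loading a file. A sketch of such a check on synthetic arrays (the helper name is illustrative, not part of the dataset tooling):

```python
import numpy as np

def check_invariants(probas, labels):
    """Verify the stated guarantees: non-negative probabilities that sum
    to 1 in the multiclass case, and 0-indexed integer labels in range."""
    probas, labels = np.asarray(probas), np.asarray(labels)
    assert (probas >= 0).all()
    if probas.ndim == 2:  # multiclass: (n, n_classes)
        assert np.allclose(probas.sum(axis=1), 1.0)
        assert labels.min() >= 0 and labels.max() < probas.shape[1]
    else:  # binary: positive-class probability vector
        assert (probas <= 1).all()
        assert set(np.unique(labels)) <= {0, 1}
    return True

# Synthetic multiclass example in place of a real (dataset, model) group.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3))
probas = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
check_invariants(probas, rng.integers(0, 3, size=5))
```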

### Experiment CSV files

Each `{benchmark}-experiments.csv` lists one row per (dataset, model) pair:

| Column | Description |
|---|---|
| `dataset` | Dataset name (matches the HDF5 group key) |
| `model` | Model name (matches the HDF5 group key) |
| `cal_size` | Number of calibration samples |
| `test_size` | Number of test samples |
| `n_classes` | Number of classes (multiclass benchmarks only) |
| `tabrepo_fold` / `tabarena_fold` | Fold index used (TabRepo/TabArena benchmarks) |
| `tabrepo_config` / `tabarena_config` | Best hyperparameter configuration selected (TabRepo/TabArena) |

---

## Loading the data

### Python (h5py)

```python
import h5py
import numpy as np

with h5py.File("tabrepo-binary.h5", "r") as f:
    # List all (dataset, model) pairs
    pairs = [(ds, mdl) for ds in f for mdl in f[ds]]

    # Load a single experiment
    grp = f["anneal/CatBoost"]
    p_cal = grp["probas_cal"][:]    # shape (n_cal,)
    y_cal = grp["labels_cal"][:]    # shape (n_cal,)
    p_test = grp["probas_test"][:]  # shape (n_test,)
    y_test = grp["labels_test"][:]  # shape (n_test,)
```
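
Given `p_cal`, `y_cal`, and `p_test` loaded as above, a calibrator is fitted on the calibration split and applied to the test split. As an illustration, here is a minimal binary temperature-scaling baseline over a grid of temperatures; this is a generic sketch, not CalArena's own calibrator implementation:

```python
import numpy as np

def nll(p, y, eps=1e-12):
    """Binary negative log-likelihood of probabilities p given labels y."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def temperature_scale(p_cal, y_cal, p_test):
    """Rescale logits by 1/T, choosing T on the calibration split by
    grid search over the negative log-likelihood."""
    p_cal = np.clip(p_cal, 1e-6, 1 - 1e-6)
    p_test = np.clip(p_test, 1e-6, 1 - 1e-6)
    logit_cal = np.log(p_cal) - np.log1p(-p_cal)
    logit_test = np.log(p_test) - np.log1p(-p_test)
    temps = np.linspace(0.1, 10.0, 200)
    losses = [nll(1 / (1 + np.exp(-logit_cal / t)), y_cal) for t in temps]
    t_best = temps[int(np.argmin(losses))]
    return 1 / (1 + np.exp(-logit_test / t_best))

# Overconfident synthetic predictor: constant 0.95 confidence on labels
# that are positive only ~70% of the time. Scaling pulls it toward ~0.7.
rng = np.random.default_rng(0)
y = (rng.random(2000) < 0.7).astype(int)
p = np.full(2000, 0.95)
p_scaled = temperature_scale(p, y, p)
```

Grid search keeps the sketch dependency-free; in practice the single temperature parameter is usually fitted with a few Newton or L-BFGS steps instead.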

### With the CalArena runner

The [CalArena repository](https://github.com/super-anonymous-researcher/CalArena) provides `run_benchmark.py`, which loads these files automatically and runs all calibrators:

```bash
# Place the .h5 and .csv files under calibration_benchmarks/
python run_benchmark.py --benchmark tabrepo-binary
```

---

## Dataset construction

The scripts used to generate the benchmark files are available in the [CalArena repository](https://github.com/super-anonymous-researcher/CalArena).

### Calibration / test split

For TabRepo and TabArena, the calibration split corresponds to the **validation fold** of the respective repository, and the test split is the **held-out test set**.
This ensures no data leakage: the base model never sees the calibration set during training.

For computer vision datasets, the calibration and test splits are the fixed partitions provided by the original data sources.

### Excluded datasets

The following datasets were excluded due to errors in the upstream repositories:

- **TabRepo binary**: MiniBooNE
- **TabRepo multiclass**: jannis, kropt, shuttle

---

## Intended use

This dataset is intended for:
- Benchmarking post-hoc calibration algorithms on diverse classification tasks
- Studying the relationship between model type, dataset characteristics, and calibration difficulty
- Developing new calibration methods with access to pre-computed probability estimates

---

## License

The benchmark data is released under **CC BY 4.0**. The underlying source datasets (OpenML, CIFAR, ImageNet, etc.) retain their original licenses; please consult the respective sources before redistribution.

---

## Citation

```bibtex
@inproceedings{calarena2025,
  title     = {CalArena: A Large-Scale Benchmark for Post-Hoc Calibration},
  author    = {...},
  booktitle = {...},
  year      = {2025},
}
```