
RADAR: Benchmarking Language Models on Imperfect Tabular Data

Link: Paper | Code

RADAR

The Robust And Data Aware Reasoning (RADAR) benchmark is designed to evaluate the ability of language models to demonstrate data-awareness—that is, to recognize, reason over, and appropriately handle complex data artifacts such as:

  • Missing data
  • Bad values
  • Outliers
  • Inconsistent formatting
  • Inconsistent multi-column logic

The full dataset comprises 53 tasks grounded in real-world data tables, varied across data artifact types and table dimensions (token count and number of columns). In total, RADAR provides 2,980 unique query-table task instances. We also include two subsets: (1) radar-sizes (RADAR-S), which focuses evaluation on table sizes, and (2) radar-tasks (RADAR-T), which focuses evaluation across all tasks.

📊 Dataset Statistics

| Dataset Split | Tasks | Instances | Tokens (K) | Cols |
|---|---|---|---|---|
| RADAR | 53 | 2,980 | [2, 4, 8, 16] | [5, 10, 20] |
| RADAR-T | 53 | 313 | 8 | 10 |
| RADAR-S | 10 | 720 | [2, 4, 8, 16] | [5, 10, 20] |

🔭 Dataset Structure

Each task instance comprises the following data:

  • task_id: a unique id for each source table and query
  • query: the query to ask over the data table
  • answer: ground truth answer to the query
  • artifact_type: the artifact type introduced to the data table for this task
  • artifact_scope: whether reasoning over the data artifacts involves only a single column, multiple columns independently (naive), or multiple columns jointly (connected)
  • query_cols: the columns involved in the query
  • artifact_reasoning_cols: the columns involved in reasoning over the artifacts
  • table: the data table for this task (a dictionary with keys "headers" and "rows" to represent the table column names and rows)
  • num_rows: number of rows in the table
  • num_cols: number of columns in the table
  • recovered_tables_transform_spec: The ground-truth answer is calculated over the recovered data table(s). This specification, indicating which rows to drop and which cells to overwrite, converts the data table in table into the recovered data table(s).
  • base_data_num_tokens: The number of tokens in the data table (before introducing any data artifact perturbations). This may be slightly different after introducing perturbations.
  • base_data_token_bucket: The token bucket this task belongs to (one of 2,000, 4,000, 8,000, or 16,000)
  • perturbation_note: Any note about the data artifact perturbation that is introduced.

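As a rough illustration of how recovered_tables_transform_spec could be applied, the sketch below overwrites the listed cells and drops the listed rows to rebuild a recovered table. This is a minimal sketch under the field layout documented above: `apply_transform_spec` and the toy instance are hypothetical (not part of the released code), and the outer lists in the spec are assumed to index multiple recovered-table variants.

```python
import pandas as pd

# Toy task instance mirroring the documented fields; values are illustrative,
# not taken from the real dataset.
instance = {
    "table": {
        "headers": ["rank", "age"],
        "rows": [["1.0", "30"], ["2.0", "43"], ["3.0", "999"]],
    },
    "recovered_tables_transform_spec": {
        "drop_rows": [[2]],  # one list of row indices per recovered variant
        "overwrite_cells": [[{"col": "age", "new_value": "31", "row": 0}]],
    },
}

def apply_transform_spec(table, spec, variant=0):
    """Hypothetical helper: recover one cleaned table from the spec.

    Overwrites the listed cells first (row indices refer to the original
    table), then drops the listed rows.
    """
    df = pd.DataFrame(table["rows"], columns=table["headers"])
    for cell in spec["overwrite_cells"][variant]:
        df.at[cell["row"], cell["col"]] = cell["new_value"]
    return df.drop(index=spec["drop_rows"][variant]).reset_index(drop=True)

recovered = apply_transform_spec(
    instance["table"], instance["recovered_tables_transform_spec"]
)
```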
💻 Loading the Data

Using Hugging Face

from datasets import load_dataset
radar_all = load_dataset("kenqgu/radar", "radar")["test"]
radar_s = load_dataset("kenqgu/radar", "radar-sizes")["test"]
radar_t = load_dataset("kenqgu/radar", "radar-tasks")["test"]

Using the included RADAR code to load the data into more usable Pydantic objects (requires installing radar first).

from radar.data import load_task_instances_hf

# load the full dataset
tasks, task_summary_df = load_task_instances_hf(split="full")
tasks_s, _ = load_task_instances_hf(split="sizes")
tasks_t, _ = load_task_instances_hf(split="tasks")

# view the table as a pandas dataframe
tasks[0].table_df.head()

📖 Citation

If you use RADAR in your research, please cite our paper:

@article{gu2025radar,
  title={RADAR: Benchmarking Language Models on Imperfect Tabular Data}, 
  author={Ken Gu and Zhihan Zhang and Kate Lin and Yuwei Zhang and Akshay Paruchuri and Hong Yu and Mehran Kazemi and Kumar Ayush and A. Ali Heydari and Maxwell A. Xu and Girish Narayanswamy and Yun Liu and Ming-Zher Poh and Yuzhe Yang and Mark Malhotra and Shwetak Patel and Hamid Palangi and Xuhai Xu and Daniel McDuff and Tim Althoff and Xin Liu},
  year={2025},
  eprint={2506.08249},
  archivePrefix={arXiv},
  primaryClass={cs.DB},
  url={https://arxiv.org/abs/2506.08249}, 
}