---
annotations_creators:
  - expert-generated
configs:
  - config_name: default
    data_files:
      - split: train
        path: full.parquet
language:
  - de
language_creators:
  - found
license:
  - cc-by-nc-4.0
multilinguality:
  - monolingual
pretty_name: German Kangaroo Benchmark
size_categories:
  - 1K<n<10K
source_datasets:
  - original
task_categories:
  - question-answering
  - visual-question-answering
task_ids:
  - multiple-choice-qa
  - visual-question-answering
tags:
  - mathematics
  - benchmark
  - multimodal
  - german
  - abstention
  - education
---

# German Kangaroo Benchmark

This repository contains the finalized dataset artifact for the German Kangaroo benchmark, a longitudinal multimodal mathematical-reasoning benchmark derived from the German Mathematical Kangaroo competition corpus (1998–2025).

The authoritative dataset file is `full.parquet`.

It contains 3,887 multiple-choice items across 140 exams and five grade buckets. Each row stores the question text, answer options, gold answer, point value, year, grade bucket, modality flag, and embedded image fields for question and option images.
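The row structure above can be sketched with pandas. This is a minimal illustration, not repository code: the tiny in-memory frame, its `id` values, and the grade-bucket labels are hypothetical stand-ins; in practice one would read `full.parquet` directly.

```python
import pandas as pd

# Real usage: df = pd.read_parquet("full.parquet")
# For illustration, a tiny frame with a subset of the 21 columns
# (ids and grade-bucket labels below are hypothetical):
df = pd.DataFrame({
    "id": ["1998-3-1", "1998-3-2", "2025-11-5"],
    "year": [1998, 1998, 2025],
    "group": ["3-4", "3-4", "11-13"],
    "points": [3, 4, 5],
    "answer": ["A", "C", "E"],
    "multimodal": [False, True, False],
})

# Text-only subset, e.g. for evaluating a language-only model
text_only = df[~df["multimodal"]]

# Item counts per year and grade bucket
counts = df.groupby(["year", "group"]).size()

print(len(text_only))  # 2
```

The same filters apply unchanged to the full 3,887-row file once it is loaded.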

## Intended Use

The dataset is intended for evaluating language and vision-language models on German school-mathematics problems under contest-faithful multiple-choice scoring. It supports analysis of text-only reasoning, multimodal reasoning, answer-selection behavior under negative marking, and comparisons between model performance and official aggregate human contest statistics.

The dataset should not be used to infer that a model reasons like a human student, to grade students, or to replace human reference data for exam calibration. The accompanying paper finds that current LLM scores do not provide stable reference anchors for human exam difficulty.

## Files

| File | Description |
| --- | --- |
| `full.parquet` | Final corrected benchmark dataset used for the paper. |
| `schema.md` | Column-level schema description. |
| `evaluation_protocol.md` | Summary of the contest-faithful evaluation protocol. |
| `croissant.jsonld` | Croissant metadata with responsible-AI fields for NeurIPS submission. |

## Dataset Structure

The Parquet file has 3,887 rows and 21 columns:

- `id`
- `year`
- `group`
- `language`
- `points`
- `problem_number`
- `problem_statement`
- `answer`
- `multimodal`
- `sol_A`
- `sol_B`
- `sol_C`
- `sol_D`
- `sol_E`
- `question_image`
- `sol_A_image_bin`
- `sol_B_image_bin`
- `sol_C_image_bin`
- `sol_D_image_bin`
- `sol_E_image_bin`
- `associated_images_bin`

Image columns store PNG bytes or null values; `associated_images_bin` stores a list of additional embedded image payloads where applicable.
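Because the image columns are nullable raw PNG bytes, consumers need a small amount of defensive handling. A stdlib-only sketch (the function name and the demo byte string are illustrative, not part of the repository):

```python
from pathlib import Path
from typing import Optional

# Standard 8-byte PNG file signature
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def save_png(payload: Optional[bytes], out_path: str) -> bool:
    """Write an embedded image column value to disk; return False for nulls."""
    if payload is None:  # image columns are nullable
        return False
    if not payload.startswith(PNG_MAGIC):
        raise ValueError("column payload is not a PNG stream")
    Path(out_path).write_bytes(payload)
    return True

# Stand-in payload: PNG signature plus dummy bytes (not a renderable image)
demo = PNG_MAGIC + b"\x00" * 8
print(save_png(demo, "question.png"))  # True
print(save_png(None, "question.png"))  # False
```

In real use, `payload` would come from a cell of `question_image` or one of the `sol_*_image_bin` columns.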

## Scoring

The benchmark follows the original contest scoring structure. Items are worth 3, 4, or 5 points. A correct answer receives the full item value, an incorrect answer loses one quarter of the item value, and abstention receives zero points. Normalized exam scores are reported as percent of the maximum achievable score for that exam.
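The scoring rule above can be written out directly (function names are illustrative, not from the repository):

```python
def item_score(points: int, answered: bool, correct: bool) -> float:
    """Contest scoring: +points if correct, -points/4 if wrong, 0 on abstention."""
    if not answered:
        return 0.0
    return float(points) if correct else -points / 4

def normalized_score(results, max_score: float) -> float:
    """Exam score as percent of the maximum achievable score."""
    total = sum(item_score(p, a, c) for p, a, c in results)
    return 100.0 * total / max_score

# Example exam fragment: (points, answered, correct)
results = [(3, True, True), (4, True, False), (5, False, False)]
print(normalized_score(results, max_score=3 + 4 + 5))  # ≈ 16.67 = 100 * (3 - 1 + 0) / 12
```

Note that the quarter-point penalty makes abstention strictly better than guessing at below-chance accuracy, which is why the benchmark can probe answer-selection behavior under negative marking.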

## Source and Provenance

The benchmark is curated from public German Mathematical Kangaroo competition materials covering 1998–2025. The dataset construction pipeline rasterizes question regions, applies OCR, preserves embedded visual content, and stores question and option images directly in the Parquet file.

## Limitations

The dataset is German-only, so model performance may reflect both mathematical reasoning and German-language competence. The curation process uses OCR and image extraction, which may introduce artifacts not present for original human contest participants. Older contest materials may have appeared in model training data. Official human reference data is aggregate rather than item-level, so the benchmark supports exam-level comparison but not item-response modeling against human responses.

## License

This dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/

The benchmark is derived from German Mathematical Kangaroo competition materials. Users must provide appropriate attribution to the dataset authors and the original German Mathematical Kangaroo source materials, may use the dataset for non-commercial research and evaluation, and may not use it for commercial purposes without additional permission.

## Citation

Citation information will be added after the associated paper is finalized.