# Evaluation Protocol

The benchmark is designed for single-turn exam-mode evaluation. Each item is presented once with:

- the German problem text,
- five answer choices `A`–`E`,
- the question image, if any,
- any image-based answer options.

The model must return a single final answer letter or an explicit abstention token such as `Declined`.
Scoring follows the original contest rules:

- correct answer: `+points`,
- wrong answer: `-points / 4`,
- abstention: `0`.
Accuracy treats abstentions as incorrect, so that solving performance and abstention policy can be analyzed separately.
Exam-level results should report both raw contest points and the normalized percent of the maximum score:

```text
percent_max = total_points / maximum_possible_points * 100
```
Evaluation should avoid tool use, retrieval, multi-turn correction, and item revisiting unless such a setup is explicitly reported as a separate protocol.