Llabres committed on
Commit ddd92a7 · verified · 1 Parent(s): 7a8d82c

Update README.md

Files changed (1):
  1. README.md (+108 -135)

README.md CHANGED
@@ -25,7 +25,7 @@ configs:
 
 <h1 align="center">DocVQA 2026 | ICDAR2026 Competition on Multimodal Reasoning over Documents in Multiple Domains</h1>
 
- <p align="center">
 <a href="https://www.docvqa.org/challenges/2026">
 <img src="https://img.shields.io/badge/🌐_Website-DocVQA.org-orange.svg" alt="Competition Website">
 </a>
@@ -35,29 +35,64 @@ configs:
 <a href="https://github.com/VLR-CVC/DocVQA2026">
 <img src="https://img.shields.io/badge/GitHub-Eval_Code-black.svg?logo=github&logoColor=white" alt="GitHub Repository">
 </a>
 
 </p>
 
 Building upon previous DocVQA benchmarks, this evaluation dataset introduces challenging reasoning questions over a diverse collection of documents spanning eight domains: business reports, scientific papers, slides, posters, maps, comics, infographics, and engineering drawings.
 
 By expanding coverage to new document domains and introducing richer question types, this benchmark seeks to push the boundaries of multimodal reasoning and promote the development of more general, robust document understanding models.
 
- ## 🏆 Competition Hosting & Test Set
 
- The official DocVQA 2026 competition is hosted on the **Robust Reading Competition (RRC)** platform, which provides the standardized framework for our leaderboards, submissions, and result tracking.
 
- > [!NOTE]
- > **Test Set Status:** *Coming soon!* For the time being, please use the provided validation set and the evaluation code.
 
- <p align="center">
- <a href="https://rrc.cvc.uab.es/?ch=34" style="background-color: #007bff; color: white; padding: 12px 24px; text-decoration: none; border-radius: 6px; font-weight: bold; font-size: 18px; display: inline-block;">
- Join the Challenge on the RRC Platform
- </a>
- </p>
 
- ## Load & Inspect the Data
 
 ```python
 from datasets import load_dataset
@@ -76,7 +111,7 @@ print(f"Document ID: {doc_id} ({category})")
 # 'document' is a list of PIL Images (one for each page)
 images = sample["document"]
 print(f"Number of pages: {len(images)}")
- images[0].show()
 
 # 4. Access Questions and Answers
 questions = sample["questions"]
@@ -91,11 +126,12 @@ for q, q_id, a in zip(questions['question'], questions['question_id'], answers['
 print("-" * 50)
 ```
 
- ## Structure of a Sample
 
 <details>
 <summary><b>Click to expand the JSON structure</b></summary>
-
 ```json
 {
 "doc_id": "maps_2",
@@ -137,11 +173,13 @@ for q, q_id, a in zip(questions['question'], questions['question_id'], answers['
 ]
 }
 }
 ```
 </details>
 
- ## Results
 
 <p align="center">
 <img src="./assets/results_chart.jpg" alt="DocVQA 2026 Results Chart" width="80%">
@@ -149,138 +187,73 @@ for q, q_id, a in zip(questions['question'], questions['question_id'], answers['
 <em>Figure 1: Performance comparison across domains.</em>
 </p>
 
-
 <div align="center">
- <table>
- <thead>
- <tr>
- <th align="left">Category</th>
- <th align="center">Gemini 3 Pro Preview</th>
- <th align="center">GPT-5.2</th>
- <th align="center">Gemini 3 Flash Preview</th>
- <th align="center">GPT-5 Mini</th>
- </tr>
- </thead>
- <tbody>
- <tr>
- <td align="left"><b>Overall Accuracy</b></td>
- <td align="center"><b>0.375</b></td>
- <td align="center">0.350</td>
- <td align="center">0.3375</td>
- <td align="center">0.225</td>
- </tr>
- <tr>
- <td align="left">Business Report</td>
- <td align="center">0.400</td>
- <td align="center"><b>0.600</b></td>
- <td align="center">0.200</td>
- <td align="center">0.300</td>
- </tr>
- <tr>
- <td align="left">Comics</td>
- <td align="center">0.300</td>
- <td align="center">0.200</td>
- <td align="center"><b>0.400</b></td>
- <td align="center">0.100</td>
- </tr>
- <tr>
- <td align="left">Engineering Drawing</td>
- <td align="center">0.300</td>
- <td align="center">0.300</td>
- <td align="center"><b>0.500</b></td>
- <td align="center">0.200</td>
- </tr>
- <tr>
- <td align="left">Infographics</td>
- <td align="center"><b>0.700</b></td>
- <td align="center">0.600</td>
- <td align="center">0.500</td>
- <td align="center">0.500</td>
- </tr>
- <tr>
- <td align="left">Maps</td>
- <td align="center">0.000</td>
- <td align="center"><b>0.200</b></td>
- <td align="center">0.000</td>
- <td align="center">0.100</td>
- </tr>
- <tr>
- <td align="left">Science Paper</td>
- <td align="center">0.300</td>
- <td align="center">0.400</td>
- <td align="center"><b>0.500</b></td>
- <td align="center">0.100</td>
- </tr>
- <tr>
- <td align="left">Science Poster</td>
- <td align="center"><b>0.300</b></td>
- <td align="center">0.000</td>
- <td align="center">0.200</td>
- <td align="center">0.000</td>
- </tr>
- <tr>
- <td align="left">Slide</td>
- <td align="center"><b>0.700</b></td>
- <td align="center">0.500</td>
- <td align="center">0.400</td>
- <td align="center">0.500</td>
- </tr>
- </tbody>
- </table>
- </div>
 
- > [!NOTE]
- > **Evaluation Parameters:**
- > * **GPT Models:** "High thinking" enabled, temperature set to `1.0`.
- > * **Gemini Models:** "High thinking" enabled, temperature set to `0.0`.
-
- > [!WARNING]
- > **API Constraints:** Both models were evaluated via their respective APIs. If a sample fails because the input files are too large, the result counts as a failure. For example, the file input limit for OpenAI models is 50MB, and several comics in this dataset surpass that threshold.
-
- <div style="border: 1px solid black; background-color: white; color: black; padding: 20px; border-radius: 8px;">
- <h2 style="margin-top: 0; color: black;">📝 Submission Guidelines & Formatting Rules</h2>
- <p>To ensure fair and accurate evaluation across all participants, submissions are evaluated using automated metrics. Therefore, all model outputs must strictly adhere to the following formatting rules:</p>
- <ul>
- <li><strong style="color: black;">Source Adherence:</strong> Only provide answers found directly within the document. If the question is unanswerable given the provided image, the response must be exactly: <code>"Unknown"</code>.</li>
- <li><strong style="color: black;">Multiple Answers:</strong> List multiple answers in their order of appearance, separated by a comma and a single space. <strong style="color: black;">Do not</strong> use the word "and". <em>(Example: <code>Answer A, Answer B</code>)</em></li>
- <li><strong style="color: black;">Numbers & Units:</strong> Convert units to their standardized abbreviations (e.g., use <code>kg</code> instead of "kilograms", <code>m</code> instead of "meters"). Always place a single space between the number and the unit. <em>(Example: <code>50 kg</code>, <code>10 USD</code>)</em></li>
- <li><strong style="color: black;">Percentages:</strong> Attach the <code>%</code> symbol directly to the number with no space. <em>(Example: <code>50%</code>)</em></li>
- <li><strong style="color: black;">Dates:</strong> Convert all dates to the standardized <code>YYYY-MM-DD</code> format. <em>(Example: "Jan 1st 24" becomes <code>2024-01-01</code>)</em></li>
- <li><strong style="color: black;">Decimals:</strong> Use a single period (<code>.</code>) as a decimal separator, never a comma. <em>(Example: <code>3.14</code>)</em></li>
- <li><strong style="color: black;">Thousands Separator:</strong> Do not use commas to separate large numbers. <em>(Example: <code>1000</code>, not <code>1,000</code>)</em></li>
- <li><strong style="color: black;">No Filler Text:</strong> Output <strong style="color: black;">only</strong> the requested data. Do not frame your answer in full sentences (e.g., avoid "The answer is...").</li>
- </ul>
- <p><strong style="color: black;">Final Output Format:</strong> For submissions on the RRC server, only the final extracted answer is needed. We recommend that your system prefixes the final response with the following exact phrasing:</p>
- <pre style="background-color: white; color: black; border: 1px dashed black; padding: 10px; border-radius: 4px;"><code>FINAL ANSWER: [Your formatted answer]</code></pre>
 </div>
 
- --------
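A response that follows the `FINAL ANSWER:` convention can be parsed with a few lines of code. This is an illustrative sketch, not the official evaluator's parsing logic, which may differ:

```python
import re

def extract_final_answer(response: str) -> str:
    # Take the text after the last "FINAL ANSWER:" marker, if present;
    # otherwise fall back to the whole (stripped) response.
    matches = re.findall(r"FINAL ANSWER:\s*(.*)", response)
    return matches[-1].strip() if matches else response.strip()
```

Taking the last marker tolerates models that emit the phrase during intermediate reasoning before the final line.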
 
 
 
 
 
 
 
 
 
- ## Evaluation Code & Baselines
 
 
 
 
 
 
 
- To ensure consistency and fairness, all test submissions are evaluated using our official automated evaluation pipeline on the [RRC Server](https://rrc.cvc.uab.es/?ch=34&com=introduction).
 
- For the validation set, this pipeline handles the extraction of your model's answers and applies both strict formatting checks (for numbers, dates, and units) and relaxed text matching (ANLS) for text-based answers.
- You can find the complete, ready-to-use evaluation script in our official GitHub repository:
- 👉 **[VLR-CVC/DocVQA2026 GitHub Repository](https://github.com/VLR-CVC/DocVQA2026)**
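For reference, the relaxed text matching mentioned above, ANLS (Average Normalized Levenshtein Similarity), can be sketched in a few lines. This is an illustrative re-implementation, not the official evaluator; the `0.5` threshold is the value commonly used in DocVQA-style benchmarks and is an assumption here:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, one row at a time.
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def anls(prediction: str, answer: str, tau: float = 0.5) -> float:
    # Normalized similarity in [0, 1]; scores below tau are zeroed out.
    pred, ans = prediction.strip().lower(), answer.strip().lower()
    if not pred and not ans:
        return 1.0
    similarity = 1.0 - levenshtein(pred, ans) / max(len(pred), len(ans))
    return similarity if similarity >= tau else 0.0
```

Per-question scores are then averaged over the dataset to produce the final ANLS number.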
 
 
- ### What you will find in the repository:
 
- * **The Evaluator Script:** The core logic used to parse your model's outputs and calculate the final scores. You can use this script to test and evaluate your predictions locally before making an official submission.
- * **The Baseline Master Prompt:** We have included the exact prompt structure (`get_evaluation_prompt()`) used for our baseline experiments. This prompt is heavily engineered to enforce the competition's mandatory reasoning protocols and strict output formatting.
 
- We highly recommend reviewing both the evaluation script and the Master Prompt. You are welcome to use the provided prompt out-of-the-box or adapt it to better guide your own custom models!
 
 
 
- ## Dataset Structure
 
- The dataset consists of:
- 1. **Images:** High-resolution PNG renders of document pages located in the `images/` directory.
- 2. **Annotations:** A Parquet file (`val.parquet`) containing the questions, answers, and references to the image paths.
 
- ## Contact
 
- For questions, technical support, or inquiries regarding the DocVQA 2026 dataset and competition framework, please reach out to the organizing committee at [docvqa@cvc.uab.cat](mailto:docvqa@cvc.uab.cat).
 
 <h1 align="center">DocVQA 2026 | ICDAR2026 Competition on Multimodal Reasoning over Documents in Multiple Domains</h1>
 
+ <p align="center">
 <a href="https://www.docvqa.org/challenges/2026">
 <img src="https://img.shields.io/badge/🌐_Website-DocVQA.org-orange.svg" alt="Competition Website">
 </a>
 
 <a href="https://github.com/VLR-CVC/DocVQA2026">
 <img src="https://img.shields.io/badge/GitHub-Eval_Code-black.svg?logo=github&logoColor=white" alt="GitHub Repository">
 </a>
+ <a href="https://rrc.cvc.uab.es/?ch=34">
+ <img src="https://img.shields.io/badge/RRC-Competition_Platform-green.svg" alt="RRC Competition Platform">
+ </a>
 </p>
 
 Building upon previous DocVQA benchmarks, this evaluation dataset introduces challenging reasoning questions over a diverse collection of documents spanning eight domains: business reports, scientific papers, slides, posters, maps, comics, infographics, and engineering drawings.
 
 By expanding coverage to new document domains and introducing richer question types, this benchmark seeks to push the boundaries of multimodal reasoning and promote the development of more general, robust document understanding models.
 
+ # Datasets
+
+ This dataset card corresponds to the **DocVQA 2026** benchmark used in the **ICDAR 2026 competition on multimodal reasoning over documents in multiple domains**.
+
+ The benchmark includes:
+
+ - **Validation set**: contains public answers and is intended for local development and experimentation.
+ - **Test set**: contains private answers and is used for the official competition ranking.
+
+ The official competition is hosted on the **Robust Reading Competition (RRC)** platform: https://rrc.cvc.uab.es/?ch=34
+
+ Participants interested in the leaderboard and official submissions should register and submit their predictions through the RRC server.
+
+ ### Validation Set
+
+ The validation split includes public ground-truth answers and can be evaluated:
+
+ - **Locally**, using the official evaluation code: https://github.com/VLR-CVC/DocVQA2026
+ - **Online**, by submitting predictions to the RRC platform: https://rrc.cvc.uab.es/?ch=34&com=mymethods&task=1
+
+ ### Test Set
+
+ The test split contains **private answers** and can therefore only be evaluated through the official RRC platform: https://rrc.cvc.uab.es/?ch=34&com=mymethods&task=1
+
+ # Participation Requirements
+
+ To participate in the competition:
+
+ 1. A method must be submitted on the **test set** by **April 3, 2026** on the RRC platform.
+ 2. A **one- or two-page report** must be submitted by email to **docvqa@cvc.uab.cat** by **April 17, 2026**.
+
+ These reports will be included in the competition publication in the proceedings of the **International Conference on Document Analysis and Recognition (ICDAR)**, held in **Vienna, Austria**.
+
+ # Competition Categories
+
+ There are **three participation categories**, depending on the total number of parameters of the submitted method.
+
+ This count must include all parameters, whether active or not, across all models used in agentic systems.
+
+ Categories:
+
+ - **Up to 8B parameters**
+ - **Over 8B parameters and up to 35B**
+ - **Over 35B parameters**
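The bracket logic above can be expressed as a small helper. This is an illustrative function, not part of the official tooling; counts of exactly 8B or 35B are read as falling inside the "up to" bracket:

```python
def parameter_category(total_params: int) -> str:
    # Map a method's total parameter count (active or not, summed across
    # all models in an agentic system) to its competition category.
    if total_params <= 8_000_000_000:
        return "Up to 8B parameters"
    if total_params <= 35_000_000_000:
        return "Over 8B parameters and up to 35B"
    return "Over 35B parameters"
```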
+
+ ---
+
+ # Load & Inspect the Data
 
 ```python
 from datasets import load_dataset
 
 # 'document' is a list of PIL Images (one for each page)
 images = sample["document"]
 print(f"Number of pages: {len(images)}")
+ images[0].show()
 
 # 4. Access Questions and Answers
 questions = sample["questions"]
 
 print("-" * 50)
 ```
 
+
+ # Structure of a Sample
 
 <details>
 <summary><b>Click to expand the JSON structure</b></summary>
+
 ```json
 {
 "doc_id": "maps_2",
 
 ]
 }
 }
+
 ```
 </details>
 
+ ---
 
+ # Results
 
 <p align="center">
 <img src="./assets/results_chart.jpg" alt="DocVQA 2026 Results Chart" width="80%">
 
 <em>Figure 1: Performance comparison across domains.</em>
 </p>
 
 <div align="center">
 
+ | Category | Gemini 3 Pro Preview | GPT-5.2 | Gemini 3 Flash Preview | GPT-5 Mini |
+ |---|---|---|---|---|
+ | **Overall Accuracy** | **0.375** | 0.350 | 0.3375 | 0.225 |
+ | Business Report | 0.400 | **0.600** | 0.200 | 0.300 |
+ | Comics | 0.300 | 0.200 | **0.400** | 0.100 |
+ | Engineering Drawing | 0.300 | 0.300 | **0.500** | 0.200 |
+ | Infographics | **0.700** | 0.600 | 0.500 | 0.500 |
+ | Maps | 0.000 | **0.200** | 0.000 | 0.100 |
+ | Science Paper | 0.300 | 0.400 | **0.500** | 0.100 |
+ | Science Poster | **0.300** | 0.000 | 0.200 | 0.000 |
+ | Slide | **0.700** | 0.500 | 0.400 | 0.500 |
+
 </div>
 
+ > **Evaluation Parameters**
+ >
+ > - GPT models: "High thinking" enabled, temperature = 1.0
+ > - Gemini models: "High thinking" enabled, temperature = 0.0
+
+ > **API Constraints**
+ >
+ > Both model families were evaluated via their respective APIs. If a sample fails because the input files are too large, the result counts as a failure.
+ > For example, several comics exceed the 50MB file-input limit imposed by some APIs, such as OpenAI's.
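When reproducing these baselines, it can help to pre-check payload size so oversized documents are counted (or mitigated) deliberately rather than failing mid-run. A minimal sketch; the 50 MB default mirrors the limit mentioned above, but actual limits vary by provider:

```python
import os

def payload_mb(paths: list[str]) -> float:
    # Total on-disk size, in MB, of the page files attached to one request.
    return sum(os.path.getsize(p) for p in paths) / (1024 * 1024)

def exceeds_limit(paths: list[str], limit_mb: float = 50.0) -> bool:
    # True if a document's pages are too large to upload in a single request.
    return payload_mb(paths) > limit_mb
```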
 
+ ---
+
+ # Answer Formatting Rules
+
+ - **Source Adherence**
+   Only provide answers found directly within the document.
+   If the question is unanswerable given the provided image, the response must be exactly:
+   `Unknown`
+
+ - **Multiple Answers**
+   List multiple answers in their order of appearance, separated by a comma and a single space.
+   Do not use the word "and".
+   Example: `Answer A, Answer B`
+
+ - **Numbers & Units**
+   Convert units to standardized abbreviations (`kg`, `m`, etc.).
+   Always place a single space between number and unit.
+   Example: `50 kg`, `10 USD`
+
+ - **Percentages**
+   Attach `%` directly to the number with no space.
+   Example: `50%`
+
+ - **Dates**
+   Convert dates to `YYYY-MM-DD`.
+   Example: `Jan 1st 24` → `2024-01-01`
+
+ - **Decimals**
+   Use a period `.` as the decimal separator, never a comma.
+   Example: `3.14`
+
+ - **Thousands Separator**
+   Do not use commas within large numbers.
+   Example: `1000`, not `1,000`
+
+ - **No Filler Text**
+   Output only the requested data. Do not frame your answer in full sentences.
+
+ ---
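Several of these rules are mechanical and can be checked before submission. A minimal normalizer sketch, illustrative only and not the official evaluation code; it covers just the thousands-separator and percentage rules:

```python
import re

def normalize_answer(ans: str) -> str:
    # Strip surrounding whitespace.
    ans = ans.strip()
    # Thousands-separator rule: "1,000" -> "1000" (only commas between
    # digit groups of three are removed, so list commas are preserved).
    ans = re.sub(r"(?<=\d),(?=\d{3}\b)", "", ans)
    # Percentage rule: no space between the number and "%".
    ans = re.sub(r"(?<=\d)\s+%", "%", ans)
    return ans
```

Date and unit normalization are deliberately left out here, since they depend on the raw formats a model actually emits.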
 
+ # Contact
 
+ For questions, technical support, or inquiries regarding the DocVQA 2026 dataset and competition framework: **docvqa@cvc.uab.cat**
 
+ For participation, leaderboard, and submissions, please use the **RRC platform**: https://rrc.cvc.uab.es/?ch=34