saurabh-gnirut committed · Commit 04fbed6 · verified · 1 Parent(s): 81739fe

Update README.md

Files changed (1): README.md (+292 -0)

README.md CHANGED
@@ -1,3 +1,295 @@
1
  ---
2
  license: mit
3
+ language:
4
+ - en
5
+ pretty_name: open-mm-rl
6
+ size_categories:
7
+ - n<1K
8
+ tags:
9
+ - chemistry
10
+ - physics
11
+ - math
12
+ - biology
13
+ - science
14
+ - RL
15
+ task_categories:
16
+ - question-answering
17
  ---
18
+
19
+ # Dataset Summary
20
+
21
+ **Open-MM-RL** is a multimodal STEM reasoning dataset covering **Physics, Mathematics, Biology, and Chemistry**. It is designed for problems that require models to interpret visual information and combine it with step-by-step analytical reasoning.
22
+
23
+ Compared with existing multimodal reasoning benchmarks, Open-MM-RL broadens the evaluation setting beyond standard single-image question answering by including multi-panel and multi-image tasks that require integrating information across more complex visual contexts. In real-world problems, context is rarely confined to a single image: the necessary information is often fragmented across multiple related images, and scientists must reason across them to reach a solution.
24
+
25
+ The dataset includes three multimodal input formats:
26
+
27
+ * **Single-image problems**: one image paired with one question.
28
+ * **Multi-panel problems**: a composite or panel-based visual paired with one question.
29
+ * **Multi-image problems**: multiple separate images paired with one question.
30
+
31
+ These formats increase task complexity by requiring models to reason not only from text, but also across visual layouts, multiple views, and distributed evidence.
32
+
33
+ Across all formats, problems are constructed to be **self-contained, unambiguous, reasoning-intensive, and verifiable**, making the dataset useful both as an evaluation benchmark and as a training resource for reasoning-focused models.
34
+
35
+ A key distinguishing feature of this dataset is its focus on PhD-level STEM problem solving across all three multimodal formats. This makes it possible to assess both advanced subject-matter reasoning and a model's ability to synthesize information across increasingly complex visual inputs.
36
+
37
+ Unlike scientific figure benchmarks that rely heavily on captions, examples in this dataset are designed to be answered directly from the provided image or images together with the question.
38
+
39
+ ## Supported Tasks and Applications
40
+
41
+ This dataset is intended for settings where reliable answer checking matters. In particular, it is well suited for:
42
+
43
+ * Outcome-supervised training
44
+ * Reinforcement learning for reasoning
45
+ * Reward modeling
46
+ * Automatic evaluation of multimodal reasoning systems
47
+ * Benchmarking frontier model performance on verifiable STEM tasks
48
+
49
+ Because each example has a deterministic target answer, the dataset supports training and evaluation pipelines that depend on objective correctness rather than subjective preference judgments.
50
+
51
+ ## Why This Dataset Is Useful
52
+
53
+ The dataset is designed to occupy a practical middle ground: difficult enough to expose reasoning failures, but structured enough that correctness can be measured automatically. This makes it useful both for benchmarking current models and for training future multimodal reasoning systems.
54
+
55
+ Its coverage of single-image, multi-panel, and multi-image inputs also makes it possible to study how reasoning performance changes as visual evidence becomes more distributed and structurally complex.
56
+
57
+ ## Task Format
58
+
59
+ The task is to produce a final answer to a self-contained STEM question grounded in the provided visual input.
60
+
61
+ Each problem consists of:
62
+
63
+ * A question
64
+ * One or more associated images
65
+ * A deterministic ground-truth answer
66
+
67
+ The dataset is focused on answer generation for verifiable STEM reasoning, rather than caption generation, retrieval, or free-form scientific description.
68
+
69
+ ## Dataset Structure
70
+
71
+ Each example typically contains the following components:
72
+
73
+ | Field | Description |
74
+ | ------------------ | --------------------------------------------------------------------------------------------------------------------------- |
75
+ | `question` | The text of the STEM reasoning problem. |
76
+ | `image` / `images` | The visual input associated with the problem. This may be a single image, a multi-panel image, or multiple separate images. |
77
+ | `format` | The multimodal format label, such as `single_image`, `multi_panel`, or `multi_image`. |
78
+ | `subject` | The scientific domain, such as `Physics`, `Mathematics`, `Biology`, or `Chemistry`. |
79
+ | `answer` | The deterministic ground-truth final answer. |
80
+ | `answer_type` | The expected answer form, such as numeric, symbolic, equation, short text, or LaTeX. |
81
+ | `metadata` | Optional evaluation annotations, such as difficulty, grading method, units, or tolerance. |
82
+
83
+ Exact field names may vary by release version.
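+
+ Since the exact field names can vary by release, a quick way to confirm the schema is to load the dataset and inspect a single record. The snippet below is a minimal sketch assuming the standard Hugging Face `datasets` loader, a `train` split, and the placeholder repository ID used elsewhere in this card; field names should be checked against the actual release.
+
+ ```python
+ from datasets import load_dataset
+
+ # Placeholder repository ID; replace with the actual dataset repo.
+ ds = load_dataset("REPLACE_WITH_REPO_ID", split="train")
+
+ example = ds[0]
+ print(sorted(example.keys()))  # confirm the field names for this release
+
+ # The field names below follow the table above and are assumptions,
+ # not guaranteed to match every release.
+ print(example.get("format"), example.get("subject"), example.get("answer_type"))
+ print(example.get("question"))
+ print(example.get("answer"))
+ ```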
84
+
85
+ ### Example Instance
86
+
87
+ ```json
88
+ {
89
+ "question": "Given the visual input, determine the final value of the requested quantity.",
90
+ "images": ["image_001.png"],
91
+ "format": "single_image",
92
+ "subject": "Physics",
93
+ "answer": "42",
94
+ "answer_type": "numeric",
95
+ "metadata": {
96
+ "difficulty": "PhD-level",
97
+ "grading_method": "numerical_tolerance"
98
+ }
99
+ }
100
+ ```
101
+
102
+ For multi-image examples, the `images` field may contain multiple image paths:
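+
+ A minimal illustrative instance is sketched below; the file names and field values are placeholders rather than actual dataset entries.
+
+ ```json
+ {
+   "question": "Using both provided images, determine the final value of the requested quantity.",
+   "images": ["image_002a.png", "image_002b.png"],
+   "format": "multi_image",
+   "subject": "Chemistry",
+   "answer": "0.37",
+   "answer_type": "numeric",
+   "metadata": {
+     "difficulty": "PhD-level",
+     "grading_method": "numerical_tolerance"
+   }
+ }
+ ```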
103
+
104
+
105
+
106
+
107
+ ## Subject Coverage
108
+
109
+ The dataset spans multiple STEM disciplines:
110
+
111
+ * Physics
112
+ * Mathematics
113
+ * Biology
114
+ * Chemistry
115
+
116
+ This cross-domain coverage supports evaluation of both domain-specific reasoning and generalization across scientific problem types. The problems are designed to emphasize analytical reasoning, quantitative problem solving, symbolic manipulation, and integration of visual evidence.
117
+
118
+ ## Difficulty Profile
119
+
120
+ The tasks are designed to reflect advanced STEM reasoning at or near the PhD level. They are intended to require more than surface-level perception or direct extraction from the image, often involving multi-step derivations, symbolic manipulation, quantitative analysis, and synthesis of information across complex visual inputs.
121
+
122
+ The dataset aims for a learning-efficient regime in which:
123
+
124
+ * The problems are not so easy that they are already saturated.
125
+ * The success rate is not so low that all learning signals disappear.
126
+ * Difficulty varies across examples and multimodal formats.
127
+ * Stronger models can still make measurable progress.
128
+
129
+ The inclusion of single-image, multi-panel, and multi-image questions creates a richer spread of difficulty and enables more targeted analysis of model strengths and weaknesses.
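+
+ As a rough sketch of how this spread could be used in practice, the snippet below estimates a per-example pass rate from repeated rollouts and keeps only examples in a learnable band. The `solve_attempt` callable, the rollout count, and the band thresholds are all assumptions for illustration, not part of the dataset.
+
+ ```python
+ def estimate_pass_rate(example, solve_attempt, n_rollouts=8):
+     """Fraction of rollouts whose final answer matches the ground truth.
+
+     `solve_attempt` is a hypothetical callable that runs the model once on the
+     question and image(s) and returns its final-answer string. A strict string
+     match is used here for brevity; real use would apply the grading rules
+     described later in this card.
+     """
+     correct = sum(
+         solve_attempt(example["question"], example["images"]) == example["answer"]
+         for _ in range(n_rollouts)
+     )
+     return correct / n_rollouts
+
+
+ def in_learnable_band(pass_rate, low=0.05, high=0.90):
+     # Keep examples that are neither hopeless (below `low`) nor saturated (above `high`).
+     return low <= pass_rate <= high
+ ```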
130
+
131
+ ## Problem and Answer Design
132
+
133
+ Each example is written so that the final response is deterministic and programmatically checkable. The focus is on tasks where evaluation depends on the correctness of the answer rather than subjective judgment.
134
+
135
+ Typical answer formats include:
136
+
137
+ * Numerical values
138
+ * Symbolic expressions
139
+ * Simplified algebraic forms
140
+ * Short text
141
+ * Identities or derived equations
142
+ * Canonical LaTeX outputs
143
+
144
+ Because the answers are deterministic, the dataset is especially appropriate for workflows that need stable reward signals or automatic grading at scale.
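+
+ As one concrete illustration, a simple normalization pass can make exact-match grading robust to superficial formatting differences in LaTeX or symbolic answers. The rules below are assumed for illustration and are not the dataset's prescribed normalization.
+
+ ```python
+ import re
+
+ def normalize_answer(ans: str) -> str:
+     """Illustrative normalization before exact-match comparison (assumed rules)."""
+     s = ans.strip()
+     s = s.replace("\\left", "").replace("\\right", "")  # drop LaTeX sizing commands
+     s = s.replace("$", "")                               # drop math-mode delimiters
+     s = re.sub(r"\s+", "", s)                            # remove all whitespace
+     return s.lower()
+
+ # Example: both forms normalize to "(x+1)^2".
+ assert normalize_answer(r" \left( x+1 \right)^2 ") == normalize_answer("(X+1)^2")
+ ```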
145
+
146
+ ## Verifiability and Automatic Evaluation
147
+
148
+ A core design principle of this dataset is objective verifiability.
149
+
150
+ Each problem is constructed so that:
151
+
152
+ * The final answer is deterministic.
153
+ * Correctness can be evaluated programmatically.
154
+ * No subjective interpretation is required.
155
+ * There is a clear separation between reasoning process and final outcome.
156
+
157
+ Depending on the task, answers can be evaluated using:
158
+
159
+ * Normalized exact match
160
+ * Symbolic equivalence checks
161
+ * Numerical tolerance thresholds
162
+ * Unit-aware validation, where applicable
163
+
164
+ This makes the dataset well suited for reproducible benchmarking and scalable automated evaluation.
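+
+ A minimal grader sketch along these lines is shown below. It is an assumption-laden illustration rather than the official evaluation code: it uses SymPy for symbolic equivalence, a relative tolerance for numeric answers, and lowercased exact match as the fallback.
+
+ ```python
+ import sympy
+
+ def grade(prediction: str, target: str, answer_type: str, rel_tol: float = 1e-3) -> bool:
+     """Minimal grader sketch; not the official evaluation code."""
+     pred, tgt = prediction.strip(), target.strip()
+
+     if answer_type == "numeric":
+         # Numerical tolerance check.
+         try:
+             p, t = float(pred), float(tgt)
+         except ValueError:
+             return False
+         return abs(p - t) <= rel_tol * max(abs(t), 1e-12)
+
+     if answer_type == "symbolic":
+         # Symbolic equivalence: the difference simplifies to zero.
+         try:
+             return sympy.simplify(sympy.sympify(pred) - sympy.sympify(tgt)) == 0
+         except (sympy.SympifyError, TypeError):
+             return False
+
+     # Fallback: normalized exact match (e.g. short text answers).
+     return pred.lower() == tgt.lower()
+ ```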
165
+
166
+ ## Data Creation and Quality Control
167
+
168
+ All problems are developed and reviewed with an emphasis on scientific correctness and benchmark reliability. Tasks undergo two rounds of expert review by PhD-level domain specialists.
169
+
170
+ Review criteria include:
171
+
172
+ * Correctness of the prompt
173
+ * Correctness of the target answer
174
+ * Clarity of the reasoning path implied by the problem
175
+ * Absence of ambiguity in interpretation
176
+ * Originality and resistance to trivial lookup
177
+ * Identification of cases where models fail because of reasoning errors rather than annotation issues
178
+
179
+ This process is intended to ensure that dataset difficulty comes from the task itself, not from noisy labeling or underspecified questions.
180
+
181
+ ## Relevance for Reinforcement Learning
182
+
183
+ The dataset is particularly useful for reasoning-oriented reinforcement learning because each example supports an objective reward signal.
184
+
185
+ A simple setup is:
186
+
187
+ * **Input**: question and associated image(s)
188
+ * **Model output**: final predicted answer
189
+ * **Reward**: computed from agreement with the ground truth
190
+
191
+ Possible reward schemes include:
192
+
193
+ * Full credit for exact or equivalent answers
194
+ * No credit for incorrect answers
195
+ * Optional partial credit for numerically close or symbolically related outputs
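+
+ A sketch of such a reward under assumed tolerances is shown below; the partial-credit band and weights are illustrative choices rather than a prescribed scheme, and a symbolic-equivalence check (like the grader sketched earlier) could replace the plain string match for non-numeric answers.
+
+ ```python
+ def outcome_reward(prediction: str, target: str, answer_type: str) -> float:
+     """Illustrative outcome reward: full credit for a correct answer,
+     optional partial credit for numerically close answers, zero otherwise."""
+     pred, tgt = prediction.strip(), target.strip()
+
+     if answer_type == "numeric":
+         try:
+             p, t = float(pred), float(tgt)
+         except ValueError:
+             return 0.0
+         rel_err = abs(p - t) / max(abs(t), 1e-12)
+         if rel_err <= 1e-3:   # treat as correct within tolerance
+             return 1.0
+         if rel_err <= 5e-2:   # optional partial credit for near misses
+             return 0.5
+         return 0.0
+
+     # Non-numeric answers: plain normalized match; equivalence checks could be substituted.
+     return 1.0 if pred.lower() == tgt.lower() else 0.0
+ ```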
196
+
197
+ This structure supports training approaches where progress depends on measurable correctness rather than preference judgments. It is therefore a natural fit for:
198
+
199
+ * Policy optimization
200
+ * Reward-guided fine-tuning
201
+ * Outcome-supervised learning
202
+ * Iterative self-improvement pipelines
203
+
204
+ ## Intended Uses
205
+
206
+ This dataset is intended for:
207
+
208
+ * Benchmarking multimodal STEM reasoning systems
209
+ * Evaluating reasoning performance under verifiable answer supervision
210
+ * Reinforcement learning and outcome-supervised training
211
+ * Reward modeling and automated grading research
212
+ * Studying failure modes across single-image, multi-panel, and multi-image settings
213
+
214
+ ## Out-of-Scope Uses
215
+
216
+ This dataset is not designed for:
217
+
218
+ * Open-ended caption generation
219
+ * Subjective evaluation of scientific writing quality
220
+ * Conversational tutoring or pedagogical dialogue assessment
221
+ * Retrieval-based figure understanding using captions or external metadata
222
+ * Broad real-world safety judgments or non-verifiable open-ended reasoning
223
+
224
+ Because the dataset emphasizes deterministic final answers, it is less informative for tasks that require subjective interpretation or unconstrained explanation quality.
225
+
226
+ ## Limitations
227
+
228
+ Open-MM-RL is intentionally focused on verifiable STEM reasoning. As a result:
229
+
230
+ * It may not measure open-ended explanatory quality.
231
+ * It may not capture all aspects of scientific communication.
232
+ * It may not evaluate tutoring ability or interactive reasoning.
233
+ * It is not intended as a complete measure of general scientific intelligence.
234
+ * Automatic grading may require task-specific normalization for symbolic, numeric, or unit-bearing answers.
235
+
236
+ The dataset is best interpreted as a benchmark for final-answer correctness under multimodal STEM reasoning constraints.
237
+
238
+ ## Ethical Considerations
239
+
240
+ The dataset is designed for scientific reasoning and model evaluation. It does not intentionally contain personal data, demographic labels, or sensitive personal information.
241
+
242
+ Users should avoid applying the dataset outside its intended scope, especially for real-world scientific, medical, safety-critical, or educational decisions without additional expert validation.
243
+
244
+ ## Planned Extensions
245
+
246
+ Future versions of the dataset may introduce structured hinting or nudge-based augmentations for especially difficult problems.
247
+
248
+ The motivation is straightforward: in online reinforcement learning, examples with near-zero success rates often produce little or no useful learning signal. In such cases, lightweight guidance can help convert otherwise unsolved samples into learnable ones without revealing the full solution.
249
+
250
+ Possible future additions include:
251
+
252
+ * High-level conceptual hints
253
+ * Difficulty-controlled nudges
254
+ * Conditional hinting for zero-pass examples
255
+ * Augmented rollouts for frontier-level tasks
256
+
257
+ The goal of these extensions is to preserve the dataset's verifiability while making it more useful for studying how models learn from extremely difficult reasoning problems.
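+
+ One possible shape of conditional hinting, sketched purely as an illustration, is to attach a high-level hint only when an example's estimated pass rate is zero. The `hint` field is hypothetical; current releases do not include one.
+
+ ```python
+ def maybe_add_hint(example: dict, pass_rate: float) -> dict:
+     """Purely illustrative: prepend a high-level hint only for zero-pass examples."""
+     if pass_rate > 0.0 or "hint" not in example:  # hypothetical `hint` annotation
+         return example
+     hinted = dict(example)
+     hinted["question"] = f"Hint: {example['hint']}\n\n{example['question']}"
+     return hinted
+ ```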
258
+
259
+ ## Citation
260
+
261
+ If you use Open-MM-RL, please cite the dataset as follows:
262
+
263
+ ```bibtex
264
+ @dataset{patil2026openmmrl,
265
+ title = {Open-MM-RL: A Multimodal STEM Reasoning Dataset},
266
+ author = {
267
+ Shukla, Chinmayee and
268
+ Patil, Saurabh and
269
+ Han, Kihwan and
270
+ Tao, Charlotte and
271
+ Tager, Tristan and
272
+ Ukarde, Tejas Mohan and
273
+ Bertollo, Amanda Gollo and
274
+ Pande, Seetesh and
275
+ Verma, Divya and
276
+ Pooja and
277
+ Ramakrishnan and
278
+ Kumari, Surbhi and
279
+ Seth, Harshita and
280
+ Nazim, Muhammad and
281
+ Zia, Muhammad Danish and
282
+ Gupta, Rashi and
283
+ K S, Tharangini and
284
+ Yadav, Yogesh and
285
+ Okayim, Paul and
286
+ Jangra, Mandeep and
287
+ Jhakad, Pooja and
288
+ Panda, Biswajit and
289
+ Jain, Priya
290
+ },
291
+ year = {2026},
292
+ publisher = {Hugging Face},
293
+ howpublished = {\url{https://huggingface.co/datasets/REPLACE_WITH_REPO_ID}}
294
+ }
295
+ ```