---
license: cc-by-4.0
language:
- en
pretty_name: Molmo2-ER CLEVR
tags:
- embodied-reasoning
- molmo2
- molmo2-er
- vlm-training-data
---

# Molmo2-ER · CLEVR v1.0 (Stanford)

Compositional VQA over rendered 3D primitives (train split only).

This is a re-hosted, **loader-ready subset** of the upstream dataset, used to train [`allenai/Molmo2-ER-4B`](https://huggingface.co/allenai/Molmo2-ER-4B). Files mirror the upstream layout; nothing in the data has been modified.

## Upstream source

- **Original dataset:** [CLEVR v1.0 (Stanford)](https://cs.stanford.edu/people/jcjohns/clevr/)
- **Paper:** *CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning* ([arXiv:1612.06890](https://arxiv.org/abs/1612.06890))
- **License:** `cc-by-4.0` (inherits from upstream)

If you use this data, please cite the original authors:

```bibtex
@misc{johnson2016clevrdiagnosticdatasetcompositional,
  title={CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning},
  author={Justin Johnson and Bharath Hariharan and Laurens van der Maaten and others},
  year={2016},
  eprint={1612.06890},
  archivePrefix={arXiv}
}
```

## Extracting before training

This release ships the data as archives. Extract them in place before pointing `SPATIAL_DATA_HOME` at this directory:

```bash
unzip CLEVR_v1.0.zip
```
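If `unzip` is not available, the same in-place extraction can be done with Python's standard library. A minimal sketch (the helper name is illustrative, not part of the Molmo2 tooling):

```python
import zipfile
from pathlib import Path


def extract_in_place(archive):
    """Extract a .zip archive into its parent directory and return that directory."""
    archive = Path(archive)
    dest = archive.parent
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
    return dest
```

For example, `extract_in_place("CLEVR_v1.0.zip")` unpacks the archive next to itself, matching the shell command above.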

## Usage in Molmo2-ER

See the [`allenai/molmo2`](https://github.com/allenai/molmo2) repository for the data loader and training recipe. The relevant loader class for this dataset lives in `olmo/data/spatial_datasets.py`.
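Before launching training, it can help to verify that extraction produced the expected directory tree under `SPATIAL_DATA_HOME`. A small sanity-check sketch, assuming the standard CLEVR v1.0 layout (`images/train/` and `questions/CLEVR_train_questions.json`); adjust the paths if the loader in `olmo/data/spatial_datasets.py` expects something different:

```python
from pathlib import Path


def missing_clevr_paths(data_home):
    """Return expected CLEVR v1.0 paths that are missing under data_home."""
    root = Path(data_home) / "CLEVR_v1.0"
    expected = [
        root / "images" / "train",                            # rendered train images
        root / "questions" / "CLEVR_train_questions.json",    # train questions
    ]
    return [str(p) for p in expected if not p.exists()]
```

An empty return value means the train-split files this card describes are in place.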