# latex-ocr-aug
A large-scale LaTeX OCR dataset with multiple augmentation variants, designed for training image-to-LaTeX models. Contains over 1.38M training samples across five augmentation levels, plus validation and test splits.
## Dataset Summary
| Split | Subset | Samples | Shards |
|---|---|---|---|
| train | raw | 1,389,527 | 28 |
| train | light | 1,389,527 | 28 |
| train | heavy | 1,389,527 | 28 |
| train | light_text | 1,389,527 | 56 |
| train | heavy_text | 1,389,527 | 56 |
| validation | — | 77,195 | 2 |
| test | — | 77,195 | 2 |
## Dataset Structure
```text
latex-ocr-aug/
├── train/
│   ├── raw/          # No augmentation — original rendered formula images
│   ├── light/        # Light augmentation (mild noise, slight blur, small rotation)
│   ├── heavy/        # Heavy augmentation (strong distortion, shadow, perspective)
│   ├── light_text/   # Light augmentation + surrounding text context
│   └── heavy_text/   # Heavy augmentation + surrounding text context
├── validation/       # Held-out validation split
└── test/             # Held-out test split
```
Each parquet file contains the following columns:

| Column | Type | Description |
|---|---|---|
| `image` | bytes | PNG image of the rendered LaTeX formula |
| `latex` | string | Ground-truth LaTeX source string |
| `source` | string | Name of the originating corpus (e.g. `oleehyo`, `mathwriting`) |
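Because the `image` column stores raw PNG bytes, you can sanity-check a row and read its pixel dimensions with the standard library alone. This is a minimal sketch; for actual decoding you would typically use Pillow's `Image.open` instead.

```python
import struct

# Every valid PNG starts with this 8-byte signature.
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"


def png_size(image_bytes: bytes) -> tuple[int, int]:
    """Return (width, height) of a PNG without a full image library.

    The IHDR chunk directly follows the 8-byte signature; its first
    eight data bytes are big-endian width and height.
    """
    if not image_bytes.startswith(PNG_MAGIC):
        raise ValueError("not a PNG byte stream")
    # Layout: signature (8) + chunk length (4) + b"IHDR" (4) -> width at offset 16.
    width, height = struct.unpack(">II", image_bytes[16:24])
    return width, height
```

This kind of check is handy when filtering out malformed rows before building a preprocessing pipeline.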
## Augmentation Levels
- raw: Clean renders with no augmentation. Use for baseline evaluation.
- light: Mild augmentations — slight blur, small brightness/contrast jitter, minimal rotation. Suitable for general training.
- heavy: Strong augmentations — heavy distortion, shadows, perspective warp, ink simulation. Designed for robustness.
- light_text / heavy_text: Same as light/heavy but the formula image is embedded inside a larger document-like context with surrounding text, simulating real-world document scanning.
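The exact augmentation parameters used to build the subsets are not published. As a rough illustration only, a light-style augmentation on a normalized grayscale array might look like the sketch below; the jitter ranges are assumptions, not the dataset's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)


def light_augment(img: np.ndarray) -> np.ndarray:
    """Light-style augmentation: brightness/contrast jitter plus mild noise.

    `img` is a float array in [0, 1]. Parameter ranges are illustrative,
    chosen to mimic "mild" perturbations rather than match the dataset.
    """
    contrast = rng.uniform(0.9, 1.1)       # small contrast jitter
    brightness = rng.uniform(-0.05, 0.05)  # small brightness shift
    noise = rng.normal(0.0, 0.01, size=img.shape)  # faint sensor-like noise
    out = img * contrast + brightness + noise
    return np.clip(out, 0.0, 1.0)
```

Rotation, shadows, and perspective warps (the heavy pipeline) would typically be layered on top with an image-augmentation library.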
## Usage

### Load a specific subset
```python
from datasets import load_dataset

# Load raw train split
ds = load_dataset("harryrobert/latex-ocr-aug", data_dir="train/raw", split="train")

# Load heavy augmentation
ds = load_dataset("harryrobert/latex-ocr-aug", data_dir="train/heavy", split="train")

# Load validation
ds = load_dataset("harryrobert/latex-ocr-aug", data_dir="validation", split="train")
```
### Iterate samples

```python
for sample in ds:
    image = sample["image"]  # PIL image or bytes
    latex = sample["latex"]  # LaTeX string
```
## Intended Use
This dataset is intended for training and evaluating sequence-to-sequence models that convert formula images to LaTeX, such as:
- Encoder-decoder transformers (e.g., TrOCR, Donut, custom ViT + decoder)
- Autoregressive decoder models fine-tuned on formula recognition
The multiple augmentation variants allow training with curriculum learning (start on raw or light, gradually introduce heavy) or multi-task sampling across subsets.
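One way to sketch the curriculum idea is a simple epoch-to-subset schedule. The thresholds below are illustrative placeholders, not recommendations from the dataset authors.

```python
def curriculum_subset(epoch: int) -> str:
    """Pick an augmentation subset per epoch: raw -> light -> heavy.

    Epoch thresholds are illustrative; tune them for your training budget.
    """
    if epoch < 2:
        return "train/raw"
    if epoch < 5:
        return "train/light"
    return "train/heavy"
```

The returned path can be passed straight to `load_dataset(..., data_dir=...)` at the start of each epoch.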
## License
MIT