davanstrien (HF Staff) committed
Commit 173e9b8 · Parent: dafc1a9

Add Falcon OCR script (0.3B, falcon-perception engine)


New OCR script using tiiuae/Falcon-OCR with the optimized falcon-perception
inference engine (CUDA graphs + batched paged inference). Achieves 0.31 img/s
on L4, 0.53 img/s on L40S. Supports plain OCR and layout-aware extraction.
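The quoted throughput figures translate directly into job-sizing estimates; a minimal sketch (the `eta_minutes` helper is ours, not part of the script):

```python
# Rough wall-clock estimate from the throughput measured above:
# 0.31 img/s on L4, 0.53 img/s on L40S (figures from this commit message).

def eta_minutes(num_images: int, imgs_per_sec: float) -> float:
    """Estimated minutes to OCR `num_images` images at a given throughput."""
    return num_images / imgs_per_sec / 60.0

# For a 1,000-image dataset:
print(f"L4:   {eta_minutes(1000, 0.31):.0f} min")   # ~54 min
print(f"L40S: {eta_minutes(1000, 0.53):.0f} min")   # ~31 min
```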

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Files changed (2)
  1. README.md +2 -1
  2. falcon-ocr.py +445 -0
README.md CHANGED
@@ -7,7 +7,7 @@ tags: [uv-script, ocr, vision-language-model, document-processing, hf-jobs]
 
 > Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV and HuggingFace Jobs.
 
- 19 OCR scripts covering models from 0.9B to 8B parameters. Pick a model, point at your dataset, get markdown — no setup required.
+ 20 OCR scripts covering models from 0.3B to 8B parameters. Pick a model, point at your dataset, get markdown — no setup required.
 
 ## 🚀 Quick Start
 
@@ -33,6 +33,7 @@ That's it! The script will:
 
 | Script | Model | Size | Backend | Notes |
 |--------|-------|------|---------|-------|
+ | `falcon-ocr.py` | [Falcon-OCR](https://huggingface.co/tiiuae/Falcon-OCR) | 0.3B | falcon-perception | 80.3% olmOCR, layout-aware, Apache 2.0 |
 | `smoldocling-ocr.py` | [SmolDocling](https://huggingface.co/ds4sd/SmolDocling-256M-preview) | 256M | Transformers | DocTags structured output |
 | `glm-ocr.py` | [GLM-OCR](https://huggingface.co/zai-org/GLM-OCR) | 0.9B | vLLM | 94.62% OmniDocBench V1.5 |
 | `paddleocr-vl.py` | [PaddleOCR-VL](https://huggingface.co/PaddlePaddle/PaddleOCR-VL) | 0.9B | Transformers | 4 task modes (ocr/table/formula/chart) |
falcon-ocr.py ADDED
@@ -0,0 +1,445 @@
+ # /// script
+ # requires-python = ">=3.11"
+ # dependencies = [
+ #     "datasets",
+ #     "huggingface-hub",
+ #     "pillow",
+ #     "torch>=2.5",
+ #     "torchvision",
+ #     "falcon-perception[ocr]",
+ #     "tqdm",
+ # ]
+ # ///
+
+ """
+ Convert document images to text using Falcon OCR with the falcon-perception engine.
+
+ Uses the optimized OCRInferenceEngine with CUDA graphs and paged inference
+ for much faster throughput than the raw transformers API.
+
+ Features:
+ - Compact: Only 0.3B parameters
+ - Fast: Optimized inference with CUDA graphs
+ - Multi-format: Plain text, LaTeX formulas, HTML tables
+ - Layout-aware: Optional 2-stage pipeline (layout detection + per-region OCR)
+
+ Model: tiiuae/Falcon-OCR
+ Backend: falcon-perception (OCRInferenceEngine)
+ License: Apache 2.0
+
+ Examples:
+     # Basic text OCR
+     uv run falcon-ocr.py input-dataset output-dataset
+
+     # Layout-aware OCR
+     uv run falcon-ocr.py dense-docs output --task-mode layout
+
+     # Test with small sample
+     uv run falcon-ocr.py dataset test --max-samples 5 --shuffle
+
+     # Run on HF Jobs with GPU
+     hf jobs uv run --flavor l4x1 \\
+         -s HF_TOKEN \\
+         falcon-ocr.py \\
+         input-dataset output-dataset --max-samples 10
+ """
+
+ import argparse
+ import io
+ import json
+ import logging
+ import os
+ import sys
+ import time
+ from datetime import datetime
+ from typing import Any, Dict, Union
+
+ import torch
+ from datasets import load_dataset
+ from huggingface_hub import DatasetCard, login
+ from PIL import Image
+ from tqdm.auto import tqdm
+
+ logging.basicConfig(level=logging.INFO)
+ logger = logging.getLogger(__name__)
+
+ MODEL_ID = "tiiuae/Falcon-OCR"
+
+ TASK_MODES = {
+     "plain": "Full-page text extraction",
+     "layout": "Layout-aware OCR (region detection + per-region extraction)",
+ }
+
+
+ def check_cuda_availability():
+     if not torch.cuda.is_available():
+         logger.error("CUDA is not available. This script requires a GPU.")
+         logger.error("For cloud execution, use HF Jobs with --flavor l4x1 or similar.")
+         sys.exit(1)
+     else:
+         logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
+
+
+ def prepare_image(image: Union[Image.Image, Dict[str, Any], str]) -> Image.Image:
+     if isinstance(image, Image.Image):
+         pil_img = image
+     elif isinstance(image, dict) and "bytes" in image:
+         pil_img = Image.open(io.BytesIO(image["bytes"]))
+     elif isinstance(image, str):
+         pil_img = Image.open(image)
+     else:
+         raise ValueError(f"Unsupported image type: {type(image)}")
+     return pil_img.convert("RGB")
+
+
+ def create_dataset_card(
+     source_dataset: str,
+     task_mode: str,
+     num_samples: int,
+     processing_time: str,
+     image_column: str = "image",
+     split: str = "train",
+ ) -> str:
+     task_description = TASK_MODES[task_mode]
+     return f"""---
+ tags:
+ - ocr
+ - document-processing
+ - falcon-ocr
+ - {task_mode}
+ - uv-script
+ - generated
+ ---
+
+ # Document Processing using Falcon OCR ({task_mode} mode)
+
+ This dataset contains OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using [Falcon OCR](https://huggingface.co/tiiuae/Falcon-OCR), a 0.3B early-fusion vision-language model.
+
+ ## Processing Details
+
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
+ - **Model**: [{MODEL_ID}](https://huggingface.co/{MODEL_ID})
+ - **Task Mode**: `{task_mode}` - {task_description}
+ - **Number of Samples**: {num_samples:,}
+ - **Processing Time**: {processing_time}
+ - **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
+ - **Backend**: falcon-perception (OCRInferenceEngine)
+
+ ## Reproduction
+
+ ```bash
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/falcon-ocr.py \\
+     {source_dataset} \\
+     <output-dataset> \\
+     --task-mode {task_mode} \\
+     --image-column {image_column}
+ ```
+
+ Generated with [UV Scripts](https://huggingface.co/uv-scripts)
+ """
+
+
+ def main(
+     input_dataset: str,
+     output_dataset: str,
+     image_column: str = "image",
+     task_mode: str = "plain",
+     hf_token: str = None,
+     split: str = "train",
+     max_samples: int = None,
+     private: bool = False,
+     shuffle: bool = False,
+     seed: int = 42,
+     output_column: str = "markdown",
+     config: str = None,
+     create_pr: bool = False,
+     compile: bool = True,
+     cudagraph: bool = True,
+     verbose: bool = False,
+ ):
+     check_cuda_availability()
+     start_time = datetime.now()
+
+     HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
+     if HF_TOKEN:
+         login(token=HF_TOKEN)
+
+     if task_mode not in TASK_MODES:
+         raise ValueError(
+             f"Invalid task_mode '{task_mode}'. Choose from: {list(TASK_MODES.keys())}"
+         )
+
+     logger.info(f"Task mode: {task_mode} - {TASK_MODES[task_mode]}")
+     logger.info(f"Output column: {output_column}")
+
+     # Load dataset
+     logger.info(f"Loading dataset: {input_dataset}")
+     dataset = load_dataset(input_dataset, split=split)
+
+     if image_column not in dataset.column_names:
+         raise ValueError(
+             f"Column '{image_column}' not found. Available: {dataset.column_names}"
+         )
+
+     if shuffle:
+         logger.info(f"Shuffling dataset with seed {seed}")
+         dataset = dataset.shuffle(seed=seed)
+
+     if max_samples:
+         dataset = dataset.select(range(min(max_samples, len(dataset))))
+         logger.info(f"Limited to {len(dataset)} samples")
+
+     # Load model using falcon-perception
+     logger.info(f"Loading model: {MODEL_ID} via falcon-perception engine")
+     from falcon_perception import load_and_prepare_model
+     from falcon_perception.data import ImageProcessor
+     from falcon_perception.paged_ocr_inference import OCRInferenceEngine
+
+     model, tokenizer, model_args = load_and_prepare_model(
+         hf_model_id=MODEL_ID,
+         device="cuda",
+         dtype="bfloat16",
+         compile=compile,
+     )
+
+     image_processor = ImageProcessor(patch_size=16, merge_size=1)
+     engine = OCRInferenceEngine(
+         model, tokenizer, image_processor, capture_cudagraph=cudagraph
+     )
+     logger.info(f"Engine loaded. compile={compile}, cudagraph={cudagraph}")
+
+     # Prepare all images
+     logger.info(f"Processing {len(dataset)} images...")
+     all_outputs = []
+
+     if task_mode == "layout":
+         # Process one at a time for layout (returns structured regions)
+         for i in tqdm(range(len(dataset)), desc="Falcon OCR (layout)"):
+             try:
+                 pil_image = prepare_image(dataset[i][image_column])
+                 results = engine.generate_with_layout(images=[pil_image], use_tqdm=False)
+                 regions = results[0] if results else []
+                 all_outputs.append(json.dumps(regions, ensure_ascii=False))
+             except Exception as e:
+                 logger.error(f"Error processing image {i}: {e}")
+                 all_outputs.append(f"[OCR ERROR: {str(e)[:200]}]")
+     else:
+         # Batch plain OCR for better throughput
+         batch_size = 8
+         for batch_start in tqdm(
+             range(0, len(dataset), batch_size), desc="Falcon OCR (plain)"
+         ):
+             batch_end = min(batch_start + batch_size, len(dataset))
+             batch_images = []
+             for i in range(batch_start, batch_end):
+                 try:
+                     batch_images.append(prepare_image(dataset[i][image_column]))
+                 except Exception as e:
+                     logger.error(f"Error preparing image {i}: {e}")
+                     batch_images.append(Image.new("RGB", (100, 100)))
+
+             try:
+                 texts = engine.generate_plain(
+                     images=batch_images, use_tqdm=False
+                 )
+                 all_outputs.extend(texts)
+             except Exception as e:
+                 logger.error(f"Error processing batch {batch_start}-{batch_end}: {e}")
+                 all_outputs.extend(
+                     [f"[OCR ERROR: {str(e)[:200]}]"] * len(batch_images)
+                 )
+
+     # Calculate processing time
+     processing_duration = datetime.now() - start_time
+     processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"
+
+     # Add output column
+     logger.info(f"Adding '{output_column}' column to dataset")
+     dataset = dataset.add_column(output_column, all_outputs)
+
+     # Track inference info
+     inference_entry = {
+         "model_id": MODEL_ID,
+         "model_name": "Falcon-OCR",
+         "model_size": "0.3B",
+         "task_mode": task_mode,
+         "column_name": output_column,
+         "timestamp": datetime.now().isoformat(),
+         "backend": "falcon-perception",
+     }
+
+     if "inference_info" in dataset.column_names:
+         def update_inference_info(example):
+             try:
+                 existing_info = (
+                     json.loads(example["inference_info"])
+                     if example["inference_info"]
+                     else []
+                 )
+             except (json.JSONDecodeError, TypeError):
+                 existing_info = []
+             existing_info.append(inference_entry)
+             return {"inference_info": json.dumps(existing_info)}
+
+         dataset = dataset.map(update_inference_info)
+     else:
+         inference_list = [json.dumps([inference_entry])] * len(dataset)
+         dataset = dataset.add_column("inference_info", inference_list)
+
+     # Push to hub
+     logger.info(f"Pushing to {output_dataset}")
+     max_retries = 3
+     for attempt in range(1, max_retries + 1):
+         try:
+             if attempt > 1:
+                 logger.warning("Disabling XET (fallback to HTTP upload)")
+                 os.environ["HF_HUB_DISABLE_XET"] = "1"
+             dataset.push_to_hub(
+                 output_dataset,
+                 private=private,
+                 token=HF_TOKEN,
+                 max_shard_size="500MB",
+                 **({"config_name": config} if config else {}),
+                 create_pr=create_pr,
+                 commit_message=f"Add {MODEL_ID} OCR results ({len(dataset)} samples)"
+                 + (f" [{config}]" if config else ""),
+             )
+             break
+         except Exception as e:
+             logger.error(f"Upload attempt {attempt}/{max_retries} failed: {e}")
+             if attempt < max_retries:
+                 delay = 30 * (2 ** (attempt - 1))
+                 logger.info(f"Retrying in {delay}s...")
+                 time.sleep(delay)
+             else:
+                 logger.error("All upload attempts failed. OCR results are lost.")
+                 sys.exit(1)
+
+     # Create and push dataset card
+     logger.info("Creating dataset card")
+     card_content = create_dataset_card(
+         source_dataset=input_dataset,
+         task_mode=task_mode,
+         num_samples=len(dataset),
+         processing_time=processing_time_str,
+         image_column=image_column,
+         split=split,
+     )
+     card = DatasetCard(card_content)
+     card.push_to_hub(output_dataset, token=HF_TOKEN)
+
+     logger.info("Falcon OCR processing complete!")
+     logger.info(
+         f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
+     )
+     logger.info(f"Processing time: {processing_time_str}")
+     logger.info(
+         f"Speed: {len(dataset) / processing_duration.total_seconds():.2f} images/sec"
+     )
+
+     if verbose:
+         import importlib.metadata
+
+         logger.info("--- Resolved package versions ---")
+         for pkg in [
+             "falcon-perception", "transformers", "torch", "datasets", "pillow"
+         ]:
+             try:
+                 logger.info(f"  {pkg}=={importlib.metadata.version(pkg)}")
+             except importlib.metadata.PackageNotFoundError:
+                 logger.info(f"  {pkg}: not installed")
+
+
+ if __name__ == "__main__":
+     if len(sys.argv) == 1:
+         print("=" * 70)
+         print("Falcon OCR - 0.3B Document OCR (falcon-perception engine)")
+         print("=" * 70)
+         print(f"\nModel: {MODEL_ID}")
+         print("License: Apache 2.0")
+         print("\nTask Modes:")
+         for mode, description in TASK_MODES.items():
+             print(f"  {mode:10} - {description}")
+         print("\nExamples:")
+         print("  uv run falcon-ocr.py input-dataset output-dataset")
+         print("  uv run falcon-ocr.py dense-docs output --task-mode layout")
+         print("\nFor full help: uv run falcon-ocr.py --help")
+         sys.exit(0)
+
+     parser = argparse.ArgumentParser(
+         description="Document OCR using Falcon OCR (0.3B, falcon-perception engine)",
+         formatter_class=argparse.RawDescriptionHelpFormatter,
+         epilog=__doc__,
+     )
+
+     parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
+     parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
+     parser.add_argument(
+         "--image-column", default="image",
+         help="Column containing images (default: image)",
+     )
+     parser.add_argument(
+         "--task-mode", choices=list(TASK_MODES.keys()), default="plain",
+         help="Task type: plain (default), layout",
+     )
+     parser.add_argument("--hf-token", help="Hugging Face API token")
+     parser.add_argument(
+         "--split", default="train", help="Dataset split (default: train)",
+     )
+     parser.add_argument(
+         "--max-samples", type=int,
+         help="Maximum number of samples to process (for testing)",
+     )
+     parser.add_argument(
+         "--private", action="store_true", help="Make output dataset private",
+     )
+     parser.add_argument(
+         "--shuffle", action="store_true", help="Shuffle dataset before processing",
+     )
+     parser.add_argument(
+         "--seed", type=int, default=42, help="Random seed for shuffling (default: 42)",
+     )
+     parser.add_argument(
+         "--output-column", default="markdown",
+         help="Column name for output text (default: markdown)",
+     )
+     parser.add_argument(
+         "--config",
+         help="Config/subset name for Hub (for benchmarking multiple models)",
+     )
+     parser.add_argument(
+         "--create-pr", action="store_true",
+         help="Create a pull request instead of pushing directly",
+     )
+     parser.add_argument(
+         "--no-compile", action="store_true",
+         help="Disable torch.compile",
+     )
+     parser.add_argument(
+         "--no-cudagraph", action="store_true",
+         help="Disable CUDA graph capture",
+     )
+     parser.add_argument(
+         "--verbose", action="store_true", help="Log resolved package versions",
+     )
+
+     args = parser.parse_args()
+
+     main(
+         input_dataset=args.input_dataset,
+         output_dataset=args.output_dataset,
+         image_column=args.image_column,
+         task_mode=args.task_mode,
+         hf_token=args.hf_token,
+         split=args.split,
+         max_samples=args.max_samples,
+         private=args.private,
+         shuffle=args.shuffle,
+         seed=args.seed,
+         output_column=args.output_column,
+         config=args.config,
+         create_pr=args.create_pr,
+         compile=not args.no_compile,
+         cudagraph=not args.no_cudagraph,
+         verbose=args.verbose,
+     )
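The `push_to_hub` loop above retries with exponential backoff before giving up; the schedule can be sketched in isolation (the `backoff_delays` helper is ours, extracted for illustration):

```python
# Mirror of the retry schedule in falcon-ocr.py's upload loop:
# after a failed attempt, sleep 30 * 2**(attempt - 1) seconds, i.e. 30s then
# 60s for max_retries=3; the final failed attempt exits instead of sleeping.

def backoff_delays(max_retries: int = 3, base_seconds: int = 30) -> list[int]:
    """Seconds slept after each failed upload attempt except the last."""
    return [base_seconds * 2 ** (attempt - 1) for attempt in range(1, max_retries)]

print(backoff_delays())  # [30, 60]
```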