# Samaritan Hebrew OCR Dataset
## Dataset Summary
The Samaritan Hebrew OCR Dataset is a specialized dataset for fine-tuning OCR models on Samaritan Hebrew manuscripts. This dataset contains 46,860 annotated samples extracted from 1,374 manuscript pages, converted from PAGE-XML format to the LightOnOCR-2 training format.
The dataset includes three types of samples:
- Line-level samples: Individual textlines cropped using precise polygon masks (40,219 samples)
- Paragraph-level samples: Groups of 5-10 consecutive lines merged into surrounding polygons (5,267 samples)
- Full-page samples: Complete manuscript pages with full transcriptions (1,374 samples)
This mixed-content approach provides diverse training examples at different granularities, similar to datasets like IAM, enabling robust OCR model training.
## Dataset Structure
### Data Fields
Each sample in the dataset contains the following fields:
- `images` (`List[Image]`): A list containing a single PIL Image representing the cropped image region (line, paragraph, or full page)
- `texts` (`List[Dict]`): A list containing a conversation-style dictionary with:
  - `user`: Prompt/question (typically empty or ignored during training)
  - `assistant`: The ground-truth transcription (Unicode-normalized Hebrew text)
- `source` (`string`): Source file identifier (XML filename)
- `type` (`string`): Sample type indicator (`"line"`, `"paragraph"`, or `"page"`)
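Put together, a single record looks roughly like this. This is a hypothetical illustration: the values below are invented, and in the real dataset `images` holds an actual PIL image object rather than a string.

```python
# Hypothetical record mirroring the field layout above.
# Values are invented; "images" holds a real PIL.Image.Image in the dataset.
sample = {
    "images": ["<PIL.Image.Image mode=RGB size=812x64>"],
    "texts": [
        {
            "user": "",                  # prompt, typically empty
            "assistant": "בראשית ברא",   # ground-truth transcription (NFC)
        }
    ],
    "source": "page_0001.xml",           # originating PAGE-XML file
    "type": "line",                      # "line", "paragraph", or "page"
}
```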
### Data Splits
The dataset is split into three subsets:
| Split | Samples | Percentage | Description |
|---|---|---|---|
| Train | 39,831 | 85.0% | Training data for model fine-tuning |
| Validation | 4,686 | 10.0% | Validation data for hyperparameter tuning and early stopping |
| Test | 2,343 | 5.0% | Test data for final model evaluation |
Total Samples: 46,860
## Dataset Details
### Dataset Description
This dataset was created from Samaritan Hebrew manuscripts that were originally transcribed in Samaritan script but have been transliterated to Hebrew characters. The manuscripts span various historical periods and contain diverse textual content.
### Source Data
- Format: PAGE-XML (Prima Research PAGE format)
- Source Files: 1,374 XML annotation files with corresponding image files
- Image Formats: JPG, PNG (various formats supported)
- Annotation Format: Textline polygons with Unicode transcriptions
### Preprocessing
The dataset underwent the following preprocessing steps:
- XML Parsing: Extracted textline polygons and transcriptions from PAGE-XML files
- Polygon-Based Cropping: Images were cropped using precise polygon masks (not bounding boxes) to handle curved textlines accurately
- Unicode Normalization: All transcriptions were normalized to NFC (Canonical Composition) form, which is recommended for Hebrew text
- Sample Generation:
- Line samples: Each textline polygon was cropped individually
- Paragraph samples: Groups of 5-10 consecutive lines were merged using convex hull algorithms to create surrounding polygons
- Page samples: Full pages were included with all transcriptions joined by newlines
- Memory-Optimized Processing: Dataset was processed in batches to handle large-scale conversion efficiently
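Steps 2 and 3 above (polygon-based cropping and NFC normalization) can be sketched as follows. This is illustrative code, not the actual conversion script, and assumes Pillow is installed:

```python
import unicodedata

from PIL import Image, ImageDraw


def crop_polygon(image, polygon):
    """Crop a text line using a polygon mask rather than a plain bounding box.

    Pixels outside the polygon are filled with white so that neighbouring,
    curved lines do not bleed into the crop. `polygon` is a list of (x, y)
    tuples, as found in PAGE-XML Coords.
    """
    mask = Image.new("L", image.size, 0)
    ImageDraw.Draw(mask).polygon(polygon, fill=255)
    white = Image.new("RGB", image.size, (255, 255, 255))
    # Keep the original pixels where the mask is set, white elsewhere.
    masked = Image.composite(image.convert("RGB"), white, mask)
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    return masked.crop((min(xs), min(ys), max(xs), max(ys)))


def normalize_transcription(text):
    """Normalize a transcription to NFC, as used by this dataset."""
    return unicodedata.normalize("NFC", text)
```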
### Dataset Statistics
- Total Manuscript Pages: 1,374
- Total Samples: 46,860
- Line-level: 40,219 (85.8%)
- Paragraph-level: 5,267 (11.2%)
- Full-page: 1,374 (2.9%)
- Average Lines per Page: ~29.3 lines
- Unicode Normalization: NFC (Canonical Composition)
### Languages
- Primary Language: Hebrew (transliterated from Samaritan script)
- Script: Hebrew alphabet
- Text Direction: Right-to-left (RTL)
## Dataset Creation
### Curation Rationale
This dataset was created to enable fine-tuning of OCR models specifically for Samaritan Hebrew manuscripts. The mixed-content approach (lines, paragraphs, and full pages) provides diverse training examples that help models learn to handle:
- Individual textlines with varying curvatures
- Multi-line contexts (paragraphs)
- Full-page layouts with complex formatting
### Source Data Collection
The source data consists of aligned manuscript images with PAGE-XML annotations. The original manuscripts are historical Samaritan Hebrew texts that have been digitized and manually annotated.
### Annotation Process
Annotations were provided in PAGE-XML format with:
- Precise polygon coordinates for each textline
- Unicode transcriptions for each textline
- Page-level metadata
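As a rough sketch, annotations in this shape can be read with Python's standard library alone. The namespace URI below is the common 2013-07-15 PAGE schema; files produced against another schema version will use a different URI:

```python
import xml.etree.ElementTree as ET

# Namespace for the 2013-07-15 PAGE schema; adjust to match your files.
PAGE_NS = {"pc": "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"}


def parse_textlines(xml_string):
    """Extract (polygon, transcription) pairs from a PAGE-XML document."""
    root = ET.fromstring(xml_string)
    lines = []
    for textline in root.iterfind(".//pc:TextLine", PAGE_NS):
        coords = textline.find("pc:Coords", PAGE_NS)
        unicode_el = textline.find(".//pc:Unicode", PAGE_NS)
        # Coords points look like "10,5 60,5 60,40 10,40".
        polygon = [
            tuple(int(v) for v in point.split(","))
            for point in coords.get("points").split()
        ]
        lines.append({"polygon": polygon, "text": unicode_el.text or ""})
    return lines
```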
### Personal and Sensitive Information
This dataset contains historical manuscript transcriptions. No modern personal information is included.
## Considerations for Using the Data
### Intended Use
This dataset is intended for:
- Fine-tuning OCR models (specifically LightOnOCR-2) for Samaritan Hebrew manuscripts
- Research in historical document digitization
- OCR model evaluation and benchmarking
- Training models for Hebrew script recognition
### Out-of-Scope Use
This dataset may not be suitable for:
- Modern Hebrew text recognition (different script characteristics)
- Other Semitic languages without adaptation
- General-purpose OCR without fine-tuning
### Known Limitations
- Script Specificity: The dataset is specifically designed for Samaritan Hebrew manuscripts transliterated to Hebrew characters
- Historical Content: All samples are from historical manuscripts, which may have different characteristics than modern printed text
- Mixed Quality: Manuscript images may vary in quality, resolution, and preservation state
- Transliteration: The text has been transliterated from Samaritan script to Hebrew, which may introduce some variation
### Bias and Fairness
- The dataset represents historical manuscripts and may reflect historical biases present in the source materials
- The dataset focuses on a specific script and time period, limiting generalizability
### Social Impact
This dataset supports:
- Positive Impact: Preservation and digitization of historical manuscripts, making them more accessible for research
- Research: Enabling computational analysis of historical Hebrew texts
- Cultural Heritage: Contributing to the preservation of Samaritan Hebrew cultural heritage
## Usage
### Loading the Dataset
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("your-username/samaritan_hebrew_dataset")

# Access splits
train_dataset = dataset["train"]
val_dataset = dataset["validation"]
test_dataset = dataset["test"]

# Example: access a sample
sample = train_dataset[0]
image = sample["images"][0]                      # PIL Image
transcription = sample["texts"][0]["assistant"]  # ground-truth text
sample_type = sample["type"]                     # "line", "paragraph", or "page"
```
### Using with LightOnOCR-2 Fine-tuning
This dataset is designed to work with the LightOnOCR-2 fine-tuning scripts:
```python
from datasets import load_dataset
from transformers import LightOnOcrProcessor, LightOnOcrForConditionalGeneration

# Load dataset
dataset = load_dataset("your-username/samaritan_hebrew_dataset")

# Load processor and model
processor = LightOnOcrProcessor.from_pretrained("lightonai/LightOnOCR-2-1B-base")
model = LightOnOcrForConditionalGeneration.from_pretrained("lightonai/LightOnOCR-2-1B-base")

# Process samples (example)
sample = dataset["train"][0]
image = sample["images"][0]
text = sample["texts"][0]["assistant"]

# Prepare for training
inputs = processor(images=image, text=text, return_tensors="pt")
```
### Filtering by Sample Type
You can filter samples by type if needed:
```python
# Filter only line-level samples
line_samples = dataset["train"].filter(lambda x: x["type"] == "line")

# Filter only paragraph samples
paragraph_samples = dataset["train"].filter(lambda x: x["type"] == "paragraph")

# Filter only full-page samples
page_samples = dataset["train"].filter(lambda x: x["type"] == "page")
```
## Additional Information
### Dataset Format
- Storage Format: Apache Arrow (`.arrow` files)
- Framework: Hugging Face `datasets` library
- Image Encoding: PIL Image objects stored in Arrow format
### Dataset Version
- Version: 1.0.0
- Created: 2026-01-22
- Last Updated: 2026-01-22
### Related Datasets
- LightOnOCR-2-1B-base: Base model for fine-tuning
- IAM Dataset: Similar mixed-content OCR dataset
### Conversion Script
The dataset was created using the `convert_pagexml_to_lightonocr.py` script, which:
- Supports both PAGE-XML and ALTO-XML formats
- Implements polygon-based cropping for curved textlines
- Generates mixed-content samples (lines, paragraphs, pages)
- Applies Unicode normalization (NFC)
- Uses memory-efficient batch processing
For more information on dataset creation, see the project repository.
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{samaritan_hebrew_LightOnOcr,
  title     = {Samaritan Hebrew OCR Dataset},
  author    = {John Locke},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/samaritan-ai/samaritan_hebrew_LightOnOcr}
}
```
## License
[Please specify the license for your dataset. Common options include:]
- CC0: Public Domain
- CC-BY-4.0: Creative Commons Attribution 4.0
- CC-BY-SA-4.0: Creative Commons Attribution-ShareAlike 4.0
- Custom: Specify your custom license terms
Note: Please ensure you have the right to distribute the source manuscript images and transcriptions under your chosen license.
## Acknowledgments
- Source manuscript images and annotations: [Specify source/collection]
- PAGE-XML format: Prima Research
- LightOnOCR-2 model: LightOn AI
## Contact
For questions, issues, or contributions, please:
- Open an issue on the dataset repository
- Contact: [johnlockejrr]
## Dataset Card Metadata
- Task: Optical Character Recognition (OCR)
- Language: Hebrew (transliterated from Samaritan script)
- Multimodal: Yes (image + text)
- Size: 46,860 samples
- Splits: Train (85%), Validation (10%), Test (5%)
- Format: Hugging Face `datasets` (Arrow format)