---
dataset_info:
  config_name: default
  features:
  - name: image
    dtype:
      image:
        decode: false
  - name: text
    dtype: string
  - name: window_size
    dtype: int64
  - name: window_index
    dtype: int64
  - name: line_ids
    dtype: string
  - name: line_reading_order
    dtype: string
  - name: filename
    dtype: string
  - name: project_name
    dtype: string
  splits:
  - name: train
    num_examples: 82
    num_bytes: 344898560
  download_size: 344898560
  dataset_size: 344898560
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train/**/*.parquet
tags:
- image-to-text
- htr
- trocr
- transcription
- pagexml
license: mit
---

# Dataset Card for window-test-cli

This dataset was created with the pagexml-hf converter from Transkribus PageXML data.

## Dataset Summary

This dataset contains 82 samples in a single split.

## Dataset Structure

### Data Splits

- **train**: 82 samples

### Dataset Size

- Approximate total size: 328.92 MB
- Total samples: 82

### Features

- **image**: `Image(mode=None, decode=False)`
- **text**: `Value('string')`
- **window_size**: `Value('int64')`
- **window_index**: `Value('int64')`
- **line_ids**: `Value('string')`
- **line_reading_order**: `Value('string')`
- **filename**: `Value('string')`
- **project_name**: `Value('string')`

## Data Organization

Data is organized as parquet shards by split and project:

```
data/
├── <split>/
│   └── <project>/
│       └── <project>-<shard>.parquet
```

The Hugging Face Hub automatically merges all parquet files when loading the dataset.

## Usage

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("jwidmer/window-test-cli")

# Load a specific split
train_dataset = load_dataset("jwidmer/window-test-cli", split="train")
```

### Projects Included

- 1505-02-10_Hanserezess,_Lübeck_Dienstag_nach_Scholastice_1505_(SAHST_Rep__2,_I_040-4)
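Each row is one window over a page, so windows belonging to the same page can be regrouped via `filename` and reordered via `window_index`. A minimal sketch of that regrouping, using mock rows with illustrative values (loading the real dataset requires network access):

```python
from collections import defaultdict

# Mock rows mirroring the card's features; the text values are
# illustrative, not taken from the real data.
rows = [
    {"filename": "page_0001.xml", "window_index": 1, "text": "second window"},
    {"filename": "page_0001.xml", "window_index": 0, "text": "first window"},
    {"filename": "page_0002.xml", "window_index": 0, "text": "other page"},
]

# Group windows per source page, then restore window order.
pages = defaultdict(list)
for row in rows:
    pages[row["filename"]].append(row)
for windows in pages.values():
    windows.sort(key=lambda r: r["window_index"])

# Concatenate window transcriptions per page.
page_text = {f: " ".join(w["text"] for w in ws) for f, ws in pages.items()}
```

The same loop works on the loaded `train` split, since `datasets` yields each row as a plain dict.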
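Because the `image` feature is stored with `decode=False`, each row carries the raw encoded image rather than a decoded array; with the `datasets` library this arrives as a dict with `bytes` and `path` keys. A hedged sketch of decoding one row's image with Pillow (the synthetic 4×4 image below only stands in for a real page-scan crop):

```python
import io

from PIL import Image  # Pillow


def load_window_image(sample):
    """Decode the raw image bytes of one dataset row into a PIL image."""
    return Image.open(io.BytesIO(sample["image"]["bytes"]))


# Illustrative round-trip with a synthetic image standing in for a row.
buf = io.BytesIO()
Image.new("RGB", (4, 4), "white").save(buf, format="PNG")
fake_sample = {"image": {"bytes": buf.getvalue(), "path": None}}

img = load_window_image(fake_sample)
print(img.size)  # (4, 4)
```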