---
language:
- en
license: mit
task_categories:
- automatic-speech-recognition
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: chunk_id
    dtype: string
  - name: original_file
    dtype: string
  - name: start_time
    dtype: float64
  - name: end_time
    dtype: float64
  - name: speaker_id
    dtype: string
  - name: domain
    dtype: string
  - name: noise_conditions
    dtype: string
  - name: language
    dtype: string
  splits:
  - name: train_0
    num_bytes: 14241773830.618
    num_examples: 36239
  - name: train_1
    num_bytes: 14241773830.618
    num_examples: 36239
  - name: train_2
    num_bytes: 14241773830.618
    num_examples: 36239
  - name: train_3
    num_bytes: 14241773830.618
    num_examples: 36239
  - name: train_4
    num_bytes: 14241773830.618
    num_examples: 36239
  - name: train_5
    num_bytes: 14241773830.618
    num_examples: 36239
  - name: train_6
    num_bytes: 14241773830.618
    num_examples: 36239
  - name: train_7
    num_bytes: 14241773830.618
    num_examples: 36239
  - name: train_8
    num_bytes: 14241773830.618
    num_examples: 36239
  - name: train_9
    num_bytes: 14241773830.618
    num_examples: 36239
  - name: train_10
    num_bytes: 14241773830.618
    num_examples: 36239
  - name: train_11
    num_bytes: 14241773830.618
    num_examples: 36239
  download_size: 161139820068
  dataset_size: 170901285967.41602
configs:
- config_name: default
  data_files:
  - split: train_0
    path: data/train_0-*
  - split: train_1
    path: data/train_1-*
  - split: train_2
    path: data/train_2-*
  - split: train_3
    path: data/train_3-*
  - split: train_4
    path: data/train_4-*
  - split: train_5
    path: data/train_5-*
  - split: train_6
    path: data/train_6-*
  - split: train_7
    path: data/train_7-*
  - split: train_8
    path: data/train_8-*
  - split: train_9
    path: data/train_9-*
  - split: train_10
    path: data/train_10-*
  - split: train_11
    path: data/train_11-*
---

# Streaming ASR Dataset


This dataset is designed for training real-time (streaming) ASR models, with a focus on chunk-based audio processing. It contains standardized audio segments from LibriSpeech dev-clean, processed for streaming ASR applications.


## Dataset Description


### Dataset Summary
- Source: LibriSpeech dev-clean
- Total chunks: 2,703
- Total duration: ~20 hours (1,212.26 minutes)
- Unique speakers: 40
- Audio format: 16 kHz mono WAV
- Language: English
- Domain: Audiobooks (clean speech)


### Dataset Structure
```
openwhisper/
├── chunks/        # Audio files (16 kHz mono WAV)
├── transcripts/   # Text transcriptions
└── metadata/      # JSON files with detailed information
```


### Data Fields
Each sample consists of:
1. Audio file (WAV)
   - 16 kHz sampling rate
   - Mono channel
   - 16-bit PCM format


2. Transcript file (TXT)
   - Clean text transcription
   - Includes punctuation and casing
   - Aligned with audio chunks


3. Metadata file (JSON)
   - speaker_id: Unique speaker identifier
   - chunk_id: Unique chunk identifier
   - start_time: Start time in the original audio (seconds)
   - end_time: End time in the original audio (seconds)
   - duration: Chunk duration in seconds
   - language: Language code (en)
   - noise_conditions: Audio quality label (clean)
   - original_file: Source file reference
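
For illustration, reading one metadata record might look like the sketch below (the chunk filename is hypothetical; list `metadata/` for the actual IDs):

```python
import json
from pathlib import Path

# "chunk_0001.json" is a hypothetical ID; actual filenames may differ
meta = json.loads(Path("metadata/chunk_0001.json").read_text())
print(meta["speaker_id"], meta["start_time"], meta["end_time"], meta["duration"])
```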


### Data Splits
This dataset contains only the dev-clean portion of LibriSpeech, processed into overlapping chunks suitable for streaming ASR training. On the Hub it is sharded into twelve training splits (`train_0` through `train_11`) rather than a single `train` split; a sketch for merging them follows.
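
```python
from datasets import load_dataset, concatenate_datasets

ds = load_dataset("orgho98/openwhisper")
# Merge the twelve shards (train_0 … train_11) into one training set
train = concatenate_datasets([ds[f"train_{i}"] for i in range(12)])
print(len(train))
```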


## Dataset Creation


### Preprocessing
1. Audio standardization
   - Resampling to 16 kHz
   - Conversion to mono channel
   - Format conversion to WAV


2. Chunking strategy
   - Fixed chunk duration with overlap (sketched in the example after this list)
   - Natural pause boundary detection
   - Consistent chunk size for training stability


3. Transcript processing
   - Alignment with audio chunks
   - Preservation of punctuation and casing
   - Clean text normalization
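
A minimal sketch of the standardization and chunking steps, assuming a 30-second chunk with a 1-second overlap (illustrative values; the dataset's actual parameters are not documented here):

```python
import librosa  # assumed here for loading/resampling; any resampler works

def chunk_waveform(path: str, chunk_s: float = 30.0, overlap_s: float = 1.0):
    """Standardize audio to 16 kHz mono, then yield (start_time, samples)
    pairs for fixed-length overlapping chunks."""
    # librosa resamples to 16 kHz and downmixes to mono on load
    samples, sr = librosa.load(path, sr=16000, mono=True)
    size = int(chunk_s * sr)
    step = size - int(overlap_s * sr)
    for start in range(0, max(len(samples) - size, 0) + 1, step):
        yield start / sr, samples[start:start + size]
```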


## Usage


### Loading the Dataset
```python
from datasets import load_dataset

# Returns a DatasetDict with splits train_0 through train_11
dataset = load_dataset("orgho98/openwhisper")
```
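
Since the full download is roughly 160 GB, streaming mode may be preferable for exploration:

```python
from datasets import load_dataset

# Iterate one shard without downloading the full archive
ds = load_dataset("orgho98/openwhisper", split="train_0", streaming=True)
sample = next(iter(ds))
print(sample["text"])
```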


### Training Example
```python
# Example: iterating over audio/transcript pairs in one shard
for sample in dataset["train_0"]:
    audio = sample["audio"]        # dict with "array" and "sampling_rate"
    transcript = sample["text"]
    speaker = sample["speaker_id"]
    start, end = sample["start_time"], sample["end_time"]

    # Process for streaming ASR training
    # ...
```
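
From here, the decoded waveform can be fed to whatever front end the model uses; for instance, a Whisper-style feature extractor (the checkpoint is an illustrative choice, not prescribed by this dataset):

```python
from datasets import load_dataset
from transformers import WhisperFeatureExtractor

ds = load_dataset("orgho98/openwhisper", split="train_0", streaming=True)
audio = next(iter(ds))["audio"]

# Illustrative checkpoint; any 16 kHz front end works
fe = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
features = fe(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
print(features.input_features.shape)  # log-mel frames, e.g. (1, 80, 3000)
```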


## License
This dataset is released under the MIT License. The underlying LibriSpeech recordings are distributed under the CC BY 4.0 license, so please also credit the original corpus.


## Citation
If you use this dataset, please cite:
```bibtex
@misc{openwhisper2024,
  title={Streaming ASR Dataset},
  author={Automagically AI},
  year={2024},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/orgho98/openwhisper}}
}
```


## Limitations
- Limited to clean speech from audiobooks
- Single language (English)
- Clean, read speech may not reflect real-world streaming conditions such as background noise, spontaneous speech, or disfluencies


## Additional Information
- **Curated by:** Automagically AI
- **License:** MIT
- **Version:** 1.0.0