Dataset preview (the `audio` column, 16kHz clips of 6.24–10.8 s, is omitted here):

| expected_output | model_output | category | notes | source | wer | cer |
|---|---|---|---|---|---|---|
| The annual budget meeting will be held in conference room B at three thirty PM. | The annual budget meeting will be held in conference roomy at 3.30pm. | noisy_background | White noise at SNR -5dB (severe noise); ASR models degrade sharply below SNR 5dB | synthetic_augmentation | 0.3333 | 0.1899 |
| Yeh wala restaurant bahut accha hai, the butter chicken and naan are absolutely amazing. | Yewala restaurant Bahut Atshahe, the butter chicken and naan are absolutely amazing. | code_switched_hindi_english | Hindi-English code-switching: 'Yeh wala restaurant bahut accha hai' = 'This restaurant is very good' | edge-tts (en-IN-PrabhatNeural) | 0.2857 | 0.0795 |
| Dr. Chandrasekhar Raghunathan from AIIMS Bhubaneswar presented at the Jawaharlal Nehru Medical College symposium. | Doctor Chandrasekhar Raghunathan from Aims, Ghuvanish were presented at the Jawaharlal Nehru Medical College Symposium. | indian_proper_nouns | Indian institutional names (AIIMS, Jawaharlal Nehru) and personal names with complex phonology | edge-tts (en-IN-PrabhatNeural) | 0.2857 | 0.1239 |
| Post cholecystectomy, the patient developed iatrogenic bile duct injury requiring hepaticojejunostomy. | Post-cholecystectomy, the patient developed iatrogenic bile duct injury requiring hepaticogynostomy. | medical_terminology | Surgical terminology: cholecystectomy, iatrogenic, hepaticojejunostomy -- rare words unlikely in training data | edge-tts (en-IN-PrabhatNeural) | 0.2727 | 0.049 |
| Office mein aaj bohot kaam tha, so I left late and missed the last metro. | Office may aj bohut kam tha, so I left late and missed the last metro. | code_switched_hindi_english | Hindi-English code-switching: 'Office mein aaj bohot kaam tha' = 'There was a lot of work in office today' | edge-tts (en-IN-PrabhatNeural) | 0.2667 | 0.0822 |
| The patient presented with acute dyspnea and was prescribed azithromycin five hundred milligrams for bronchopneumonia. | The patient presented with acute dyspnea, and was prescribed azithromycin 500 mg for bronchopneumonia. | medical_terminology | Complex pharmaceutical names (azithromycin, bronchopneumonia) and clinical terms (dyspnea) | edge-tts (en-IN-PrabhatNeural) | 0.2667 | 0.178 |
| Mujhe lagta hai ki this project will be completed by next month, but humein extra resources chahiye. | Muje lakta hai ki This project will be completed by next month, but the extra resources chahi. | code_switched_hindi_english | Hindi-English code-switching: 'Mujhe lagta hai ki' = 'I think that', 'humein extra resources chahiye' = 'we need extra resources' | edge-tts (en-IN-PrabhatNeural) | 0.2353 | 0.09 |
| Shri Venkateshwara Subramanian flew from Thiruvananthapuram to Visakhapatnam via Coimbatore on Air India. | Sri Venkateshwar Subramanian flew from Thiruvananthapuram to Vishakhapatnam via Coimbatore on Air India. | indian_proper_nouns | Complex Indian proper nouns: personal name (Venkateshwara Subramanian) and South Indian city names | edge-tts (en-IN-PrabhatNeural) | 0.2308 | 0.0286 |
| The annual budget meeting will be held in conference room B at three thirty PM. | The annual budget meeting will be held in Conference Room B at 3.30pm. | noisy_background | White noise at SNR 0dB (heavy noise); ASR models degrade sharply below SNR 5dB | synthetic_augmentation | 0.2 | 0.1646 |
| Good evening sir, welcome to Taj Palace Hotel. Your deluxe suite with complimentary breakfast and airport transfer is confirmed for three nights. | Good evening, sir. Welcome to Taj Palace Hotel. Your deluxe suite with complimentary breakfast and airport transfer is confirmed for three nights. | hospitality_domain | Indian hospitality domain: hotel check-in with proper nouns (Taj Palace) and service terminology | edge-tts (en-IN-NeerjaNeural) | 0.0909 | 0.0138 |
| The property is valued at twenty five lakh rupees and the EMI is forty two thousand three hundred and fifty rupees per month. | The property is valued at twenty-five lakh rupees and the emi is forty two thousand three hundred and fifty rupees per month. | indian_numbers_dates | Indian numbering system: 'lakh' (100,000) is standard in India but absent from Western English training data | edge-tts (en-IN-PrabhatNeural) | 0.087 | 0.008 |
| The meeting is scheduled for fifteenth August twenty twenty six at fourteen hundred hours IST. | The meeting is scheduled for fifteenth August twenty twenty six at fourteen hundred Rs Ist. | indian_numbers_dates | Indian date convention and IST timezone; 'fifteenth August' is India's Independence Day | edge-tts (en-IN-PrabhatNeural) | 0.0667 | 0.0319 |
| The panchayat decided that the anganwadi workers would distribute the aadhaar forms during the gram sabha meeting. | The panchayat decided that their Anganwadi workers would distribute the Aadhaar forms during the Gram Sabha meeting. | regional_language_words | Indian administrative terms: panchayat (village council), anganwadi (childcare center), aadhaar (national ID), gram sabha (village assembly) | edge-tts (en-IN-NeerjaNeural) | 0.0588 | 0.0175 |
| So I was, um, I was thinking that, uh, maybe we should, you know, reconsider the the proposal before the deadline. | So I was, um, I was thinking that, ah, maybe we should, you know, reconsider the the proposal before the deadline. | disfluent_speech | Disfluent speech with fillers (um, uh, you know) and repetitions (I was...I was, the the); tests model's handling of non-fluent speech | edge-tts (en-IN-PrabhatNeural) | 0.0476 | 0.0088 |
# Blind Spots of nvidia/parakeet-tdt-0.6b-v2
This dataset documents 14 systematically identified blind spots in NVIDIA's parakeet-tdt-0.6b-v2 automatic speech recognition model. The errors span 8 distinct categories and reveal a consistent pattern: the model struggles with inputs outside the distribution of its Western English-centric training data.
## Model Under Test
| Property | Value |
|---|---|
| Model | nvidia/parakeet-tdt-0.6b-v2 |
| Parameters | 600M |
| Architecture | FastConformer encoder + TDT (Token-and-Duration Transducer) decoder |
| Training data | ~120,000 hours English speech (Granary dataset) |
| Reported Avg WER | 6.05% on HF Open ASR Leaderboard |
| License | CC-BY-4.0 |
## How the Model Was Loaded

Hardware: NVIDIA L4 GPU (23GB VRAM), Python 3.11, PyTorch 2.10, CUDA 12.8

```bash
pip install "nemo_toolkit[asr]" soundfile librosa
```

```python
import nemo.collections.asr as nemo_asr
from omegaconf import OmegaConf

# Load the model
asr_model = nemo_asr.models.ASRModel.from_pretrained("nvidia/parakeet-tdt-0.6b-v2")
asr_model = asr_model.to("cuda")
asr_model.eval()

# Use greedy decoding (non-batch) to avoid CUDA graph compatibility issues
decoding_cfg = asr_model.cfg.decoding
decoding_cfg.strategy = "greedy"
OmegaConf.update(decoding_cfg, "greedy.max_symbols", 10, force_add=True)
asr_model.change_decoding_strategy(decoding_cfg)

# Transcribe
output = asr_model.transcribe(["path/to/audio.wav"])
print(output[0].text)
```
### Note on CUDA Graph Compatibility
With PyTorch 2.10+ and CUDA 12.8, the default greedy_batch decoding strategy triggers a CUDA graph compilation error due to a mismatch in cudaStreamGetCaptureInfo return values. Switching to greedy (non-batch) decoding resolves this without affecting output quality.
## Dataset Schema

Each row contains:

| Column | Type | Description |
|---|---|---|
| `audio` | Audio (16kHz) | The input audio waveform |
| `expected_output` | string | The correct/ground-truth transcription |
| `model_output` | string | What parakeet-tdt-0.6b-v2 produced |
| `category` | string | Blind spot category |
| `notes` | string | Explanation of why this is an error |
| `source` | string | How the audio was generated/sourced |
| `wer` | float | Word Error Rate for this sample |
| `cer` | float | Character Error Rate for this sample |
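The per-sample `wer` and `cer` values follow the standard Levenshtein-distance definitions. A minimal sketch of how such metrics can be computed (the exact text normalization applied before scoring, e.g. case and punctuation handling, is not specified in this card, so none is assumed here):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, via dynamic programming."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution (or match)
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: character-level edit distance / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

For example, the disfluent-speech row above has one substitution ("uh," → "ah,") over 21 reference words, which reproduces the reported WER of 0.0476.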
## Blind Spot Categories and Findings

### Summary
| Category | Samples | Avg WER | Key Finding |
|---|---|---|---|
| Code-switched Hindi-English | 3 | 26.3% | Hindi words systematically garbled or dropped |
| Medical terminology | 2 | 27.0% | Rare clinical terms mangled; number format normalization |
| Indian proper nouns | 2 | 25.8% | Names and institutions misspelled or destroyed |
| Noisy background | 2 | 26.7% | Word substitutions and format changes under noise |
| Hospitality domain | 1 | 9.1% | Minor punctuation/formatting changes |
| Indian numbers & dates | 2 | 7.7% | "IST" misrecognized; hyphenation changes |
| Regional language words | 1 | 5.9% | Determiners substituted ("the" → "their") |
| Disfluent speech | 1 | 4.8% | Filler words altered ("uh" → "ah") |
### Detailed Error Analysis

#### 1. Code-Switched Hindi-English (WER: 23–29%)
The most severe blind spot. When Hindi and English are mixed in the same utterance, the model:
- Misspells Hindi words using English phonetic approximations: "Mujhe" → "Muje", "lagta" → "lakta", "bohot" → "bohut"
- Drops Hindi words entirely: "humein extra resources chahiye" → "the extra resources chahi" (lost "humein")
- Merges words: "Yeh wala" → "Yewala", "accha hai" → "Atshahe"
This is expected: the model was trained on ~120K hours of monolingual English. Hindi phonemes have no representation in its 1024-token BPE vocabulary.
#### 2. Medical Terminology (WER: 27%)
- Rare medical terms mangled: "hepaticojejunostomy" → "hepaticogynostomy" (surgical term absent from training data)
- Number format normalization: "five hundred milligrams" → "500 mg" (the model learned to normalize spoken numbers to digits, which is technically correct but deviates from verbatim transcription)
#### 3. Indian Proper Nouns (WER: 23–29%)
- Spelling variations on names: "Shri Venkateshwara" → "Sri Venkateshwar", "Visakhapatnam" → "Vishakhapatnam"
- Catastrophic errors on institutions: "AIIMS Bhubaneswar" → "Aims, Ghuvanish were" — the acronym AIIMS (All India Institute of Medical Sciences) is destroyed, and "Bhubaneswar" becomes unrecognizable
#### 4. Noisy Background (WER: 20–33%)
At SNR -5dB (severe noise):
- Word substitutions: "conference room B" → "conference roomy"
- The model degrades gracefully under moderate noise (SNR 5dB) but breaks down rapidly below SNR 0dB
#### 5–8. Other Categories
- Indian numbers/dates: "hours IST" → "Rs Ist" (IST timezone confused with currency abbreviation)
- Hospitality: Minor punctuation reformatting
- Regional words: "the anganwadi" → "their Anganwadi" (determiner substitution)
- Disfluent speech: "uh" → "ah" (filler word approximation)
## Audio Generation Methodology

Test audio was generated using:

- Edge-TTS with Indian English voices (`en-IN-PrabhatNeural`, `en-IN-NeerjaNeural`) for natural-sounding Indian English speech
- Synthetic augmentation (additive white noise at various SNR levels, time-stretching) applied to clean TTS audio
- All audio is 16kHz mono WAV format, matching the model's expected input
Note: Using TTS-generated audio rather than real human recordings means the accent patterns are somewhat idealized. Real Indian-accented English speakers would likely produce even higher error rates due to greater phonetic variation.
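The additive-noise step can be sketched as follows. This is a generic recipe for mixing white Gaussian noise at a target SNR relative to the clean signal's power, not the exact script used to build this dataset:

```python
import numpy as np

def add_white_noise(signal: np.ndarray, snr_db: float, seed: int = 0) -> np.ndarray:
    """Mix white Gaussian noise into `signal` at the requested SNR (in dB)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(signal.shape)
    signal_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale the noise so that 10*log10(signal_power / scaled_noise_power) == snr_db
    scale = np.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
    return signal + scale * noise
```

For the severe-noise condition in the table above, this would be called as `noisy = add_white_noise(clean, snr_db=-5)` on a 16kHz float waveform.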
## Recommended Fine-Tuning Dataset

To fix these blind spots, the model should be fine-tuned on a dataset combining:

### Sources
**Indian-accented English (~200 hours):**
- IndicAccentDb: 8,116 recordings across 6 Indian accent varieties
- Svarah: 9.6 hours from 117 speakers across 65 Indian locations
- CommonVoice Indian English: 163 hours
**Code-switched Hindi-English (~100 hours):**
- MUCS Challenge Data: ~600 hours including Hindi-English code-switching
- IndicVoices: 23.7K hours across 22 Indian languages (subset with code-switching)
**Domain-specific English (~100 hours):**
- Medical dictation corpora (e.g., from clinical NLP datasets)
- Indian institutional names and geographic terms (can be sourced from Indian news broadcast transcripts)
**Noisy speech augmentations:**
- Apply MUSAN noise at SNR 0–10dB to clean training data
- Room impulse response simulation for reverberant conditions
### Estimated Dataset Size
- Minimum viable: ~200 hours of targeted data (accented English + code-switched) would significantly reduce WER on these categories
- Recommended: ~500–1,000 hours combining all sources above, with noise augmentation applied to 30% of samples
- Rationale: The base model was trained on 120K hours; even 0.5% of that volume (600 hours) of high-quality targeted data has been shown to substantially improve domain adaptation in ASR models (see Whisper fine-tuning literature)
### How to Assemble

1. Download and filter existing HF datasets for Indian English content
2. Apply text normalization to unify transcription conventions
3. Use speed perturbation (0.9x–1.1x) and noise augmentation for robustness
4. Fine-tune using NeMo's ASR fine-tuning pipeline with a reduced learning rate (1e-5 to 5e-5)
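NeMo's ASR training and fine-tuning pipelines consume JSON-lines manifests with `audio_filepath`, `duration`, and `text` fields. A minimal sketch of the manifest-building step (the file paths and transcripts below are illustrative placeholders):

```python
import json

def write_nemo_manifest(samples, manifest_path):
    """Write (audio_path, duration_s, transcript) tuples as a NeMo JSON-lines manifest."""
    with open(manifest_path, "w", encoding="utf-8") as f:
        for audio_path, duration, text in samples:
            entry = {"audio_filepath": audio_path, "duration": duration, "text": text}
            f.write(json.dumps(entry, ensure_ascii=False) + "\n")

# Hypothetical assembled training samples
samples = [
    ("data/clip_0001.wav", 6.24, "yeh wala restaurant bahut accha hai"),
    ("data/clip_0002.wav", 10.8, "the panchayat decided that the anganwadi workers"),
]
write_nemo_manifest(samples, "train_manifest.jsonl")
```

The same manifest format is used for validation and test splits, so the assembled sources above can be mixed and resplit at the manifest level.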
## Reproduction

All code used to generate this dataset is included in the companion notebook and scripts:

- `scripts/source_audio.py` — audio generation and sourcing
- `scripts/run_inference.py` — model inference and WER computation
- `scripts/build_hf_dataset.py` — HuggingFace dataset construction
## Citation

If you use this dataset, please cite:

```bibtex
@dataset{parakeet_blind_spots_2026,
  title={Blind Spots of nvidia/parakeet-tdt-0.6b-v2: Indian English and Code-Switching},
  year={2026},
  url={https://huggingface.co/datasets/TieIncred/parakeet-tdt-blind-spots}
}
```