# nb-asr-north-saami-parakeet-v6-tokenremap-original

A NVIDIA NeMo Parakeet TDT model fine-tuned for North Saami automatic speech recognition (ASR).

## Run Info

### Validation Metrics During Training

| step  | val_loss | val_wer  |
|-------|----------|----------|
| 500   | 1.252147 | 0.573590 |
| 1000  | 0.982333 | 0.487658 |
| 1500  | 0.798937 | 0.407285 |
| 2000  | 0.771910 | 0.378387 |
| 2500  | 0.806764 | 0.366346 |
| 3000  | 0.815265 | 0.351294 |
| 3500  | 0.887531 | 0.360325 |
| 4000  | 0.893954 | 0.346779 |
| 4500  | 0.957318 | 0.351294 |
| 5000  | 0.984869 | 0.350391 |
| 5500  | 1.010145 | 0.335039 |
| 6000  | 1.036391 | 0.337748 |
| 6500  | 1.100679 | 0.342866 |
| 7000  | 1.080392 | 0.331126 |
| 7500  | 1.127530 | 0.324202 |
| 8000  | 1.172381 | 0.325105 |
| 8500  | 1.169507 | 0.335340 |
| 9000  | 1.226594 | 0.335340 |
| 9500  | 1.244184 | 0.326309 |
| 10000 | 1.302105 | 0.322697 |
| 10500 | 1.365594 | 0.319386 |
| 11000 | 1.367722 | 0.306743 |
| 11500 | 1.391324 | 0.313967 |
| 12000 | 1.409990 | 0.307345 |
| 12500 | 1.431023 | 0.310054 |
| 13000 | 1.443674 | 0.310054 |
| 13500 | 1.465230 | 0.310656 |
| 14000 | 1.479429 | 0.311258 |
- Best during training (by val_wer): step 11000, val_loss 1.367722, val_wer 0.306743
- Final during training: step 14000, val_loss 1.479429, val_wer 0.311258
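The "best" and "final" rows can be recomputed from the logged table; a minimal sketch, using values copied from the table above (abridged to three rows for brevity) and assuming "best" means lowest val_wer:

```python
# (step, val_loss, val_wer) rows copied from the validation table above (abridged).
history = [
    (500, 1.252147, 0.573590),
    (11000, 1.367722, 0.306743),
    (14000, 1.479429, 0.311258),
]

# Select the checkpoint with the lowest val_wer (note: not the lowest val_loss,
# which keeps rising after step ~2000 while WER continues to improve).
best = min(history, key=lambda row: row[2])
final = history[-1]

print(f"best:  step {best[0]}, val_wer {best[2]:.6f}")   # step 11000, val_wer 0.306743
print(f"final: step {final[0]}, val_wer {final[2]:.6f}")  # step 14000, val_wer 0.311258
```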

### Final Evaluation Metrics

| split      | loss     | wer      |
|------------|----------|----------|
| validation | 1.479320 | 0.310355 |
| test       | 2.656427 | 0.477638 |
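WER here is the standard metric: word-level Levenshtein distance divided by the number of reference words. A minimal illustrative implementation (not the repo's actual evaluation code, which comes from NeMo):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] = edit distance between the first i ref words and first j hyp words,
    # rolled over i to keep memory at O(len(hyp)).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            cur[j] = min(
                prev[j] + 1,             # deletion
                cur[j - 1] + 1,          # insertion
                prev[j - 1] + (r != h),  # substitution (free if words match)
            )
        prev = cur
    return prev[len(hyp)] / len(ref)

# One substituted word out of three -> WER 1/3.
print(wer("mun lean dáppe", "mun lean doppe"))
```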

### Test WER Comparison

| model                     | test_wer_raw | test_wer_normalized |
|---------------------------|--------------|---------------------|
| Parakeet (this repo)      | 0.477115     | 0.472204            |
| NbAiLab/whisper-large-sme | 0.598327     | 0.597697            |

- Delta (Whisper − Parakeet): raw 0.121212, normalized 0.125493
- Normalization for `test_wer_normalized`: `drop_if_contains=['_', ':']`, `remove_chars=['*', '[', ']', '"', '(', ')']`
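A sketch of how the normalization rule above could be applied, assuming word-level semantics: a token containing any `drop_if_contains` marker is dropped entirely, and the `remove_chars` are stripped from the remaining tokens (the function name and exact semantics are assumptions, not taken from the repo's scripts):

```python
DROP_IF_CONTAINS = ["_", ":"]
REMOVE_CHARS = ["*", "[", "]", '"', "(", ")"]

def normalize(text: str) -> str:
    """Apply the drop/remove rules used for test_wer_normalized (assumed semantics)."""
    words = []
    for word in text.split():
        if any(marker in word for marker in DROP_IF_CONTAINS):
            continue  # drop tokens carrying annotation markers like '_' or ':'
        for ch in REMOVE_CHARS:
            word = word.replace(ch, "")  # strip bracketing/quoting characters
        if word:
            words.append(word)
    return " ".join(words)

# '[sápmi]' loses its brackets, 'buot_sánit' is dropped, quotes are stripped.
print(normalize('giella [sápmi] buot_sánit "dán"'))
```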

## Notes

- Validation/test metrics above are computed post-training from the exported `.nemo` model.
- The training validation table is parsed from `Validation metrics logged to wandb` log entries.
- The Whisper comparison uses `scripts/compare_test_wer_models.py` on the same test manifest.