# whisper-small-lwazi-multilingual
This model is a fine-tuned version of openai/whisper-small on the Lwazi_asr_multilingual dataset. It achieves the following results on the evaluation set:
- Loss: 0.3685
- WER Ortho: 36.1025
- WER: 36.1358
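As a minimal usage sketch, the checkpoint can be loaded with the 🤗 Transformers `pipeline` API. The repo id below is this model's; the audio file path is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="UnarineLeo/whisper-small-lwazi-multilingual",
)

# Transcribe a local audio file (placeholder path).
result = asr("sample.wav")
print(result["text"])
```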
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 150
- training_steps: 2000
- mixed_precision_training: Native AMP
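These settings map onto 🤗 Transformers `Seq2SeqTrainingArguments` roughly as follows. This is a configuration sketch only: `output_dir` is a placeholder, and model loading, data preparation, and the `Seq2SeqTrainer` itself are omitted.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-lwazi-multilingual",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",  # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=150,
    max_steps=2000,
    fp16=True,  # mixed-precision training (native AMP)
)
```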
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER Ortho (%) | WER (%) |
|---|---|---|---|---|---|
| 1.0123 | 0.4237 | 250 | 0.9345 | 82.7840 | 82.9039 |
| 0.7055 | 0.8475 | 500 | 0.6634 | 69.5651 | 69.6134 |
| 0.4688 | 1.2712 | 750 | 0.5473 | 60.8014 | 60.8181 |
| 0.4106 | 1.6949 | 1000 | 0.4685 | 54.4493 | 54.4843 |
| 0.2469 | 2.1186 | 1250 | 0.4259 | 48.8100 | 48.9050 |
| 0.2409 | 2.5424 | 1500 | 0.3983 | 45.8038 | 45.8638 |
| 0.2243 | 2.9661 | 1750 | 0.3706 | 37.1201 | 37.1684 |
| 0.1224 | 3.3898 | 2000 | 0.3685 | 36.1025 | 36.1358 |
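For reference, WER scores like those above are conventionally computed with the Hugging Face `evaluate` library. The exact text normalization behind the orthographic and plain WER columns on this card is not specified, so the snippet below is a generic sketch with placeholder strings:

```python
import evaluate

# Generic WER computation; predictions and references are placeholders.
wer_metric = evaluate.load("wer")
predictions = ["the quick brown fox"]        # placeholder model transcripts
references = ["the quick brown fox jumps"]   # placeholder ground truth

# evaluate returns WER as a fraction; scale by 100 to match the table above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```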
### Framework versions
- Transformers 4.52.0
- PyTorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.4