# whisper-small-amh-matewosx
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1151
- WER: 0.4966
- CER: 0.3684
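
A minimal usage sketch with the `transformers` pipeline API (the checkpoint ID is this repository; the audio file path is a hypothetical placeholder):

```python
# A minimal sketch: load the fine-tuned checkpoint and transcribe an audio file.
# "example.wav" is a hypothetical placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="waxal-benchmarking/whisper-small-amh-matewosx",
)

# The pipeline decodes the file (ffmpeg required) and resamples it to the
# model's 16 kHz sampling rate before transcription.
result = asr("example.wav")
print(result["text"])
```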
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
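
As a rough guide, these values map onto `Seq2SeqTrainingArguments` as sketched below; only the listed values come from this card, while the output directory and anything else is an assumption, not the exact training script:

```python
# A sketch of how the hyperparameters above translate to Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-amh-matewosx",  # assumed path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    optim="adamw_torch_fused",       # fused AdamW, betas/epsilon at defaults
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                       # native AMP mixed-precision training
)
```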
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER | CER |
|---|---|---|---|---|---|
| 0.4046 | 0.4312 | 500 | 0.1753 | 0.6207 | 0.4243 |
| 0.3112 | 0.8624 | 1000 | 0.1247 | 0.5390 | 0.3838 |
| 0.2479 | 1.2932 | 1500 | 0.1140 | 0.5178 | 0.3754 |
| 0.2359 | 1.7245 | 2000 | 0.1096 | 0.5081 | 0.3697 |
| 0.1810 | 2.1552 | 2500 | 0.1057 | 0.5026 | 0.3678 |
| 0.1960 | 2.5865 | 3000 | 0.1043 | 0.4983 | 0.3659 |
| 0.1504 | 3.0172 | 3500 | 0.1034 | 0.4944 | 0.3657 |
| 0.1576 | 3.4485 | 4000 | 0.1018 | 0.4947 | 0.3657 |
| 0.1560 | 3.8797 | 4500 | 0.0998 | 0.4908 | 0.3644 |
| 0.1191 | 4.3105 | 5000 | 0.1066 | 0.4940 | 0.3662 |
| 0.1231 | 4.7417 | 5500 | 0.1033 | 0.4900 | 0.3649 |
| 0.0932 | 5.1725 | 6000 | 0.1151 | 0.4966 | 0.3684 |
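
The WER and CER columns can be reproduced with the `evaluate` library; a sketch with hypothetical prediction/reference strings:

```python
# A sketch of computing the WER/CER metrics reported above; the
# prediction/reference pairs are hypothetical.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["transcribed hypothesis text"]   # model outputs (hypothetical)
references = ["reference transcript text"]      # ground-truth transcripts

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```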
### Framework versions
- Transformers 5.0.0
- Pytorch 2.10.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.2