# whisper-medium-tarteel-quraan
This model is a fine-tuned version of openai/whisper-medium on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0112
- CER: 0.0736
- WER: 0.1110
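For reference, WER (word error rate) and CER (character error rate) are edit-distance metrics: the minimum number of substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the reference length, at the word and character level respectively. A minimal pure-Python sketch of the definitions (not the evaluation code used for this card, which typically comes from a library such as `evaluate` or `jiwer`):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance via dynamic programming over a single row."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                          # deletion
                dp[j - 1] + 1,                      # insertion
                prev + (ref[i - 1] != hyp[j - 1]),  # substitution / match
            )
            prev = cur
    return dp[n]

def wer(reference, hypothesis):
    """Word error rate: edit distance over word tokens."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: edit distance over characters."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```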
## Model description
More information needed
## Intended uses & limitations
More information needed
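Although this section is left open, the model can be tried with the standard `transformers` ASR pipeline. A minimal inference sketch (the repo id is taken from this card; the audio filename is hypothetical and should be a 16 kHz recording of Quran recitation):

```python
import torch
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="dmoayad/whisper-medium-tarteel-quraan",
    device=0 if torch.cuda.is_available() else -1,
)

# "recitation.wav" is a hypothetical input file.
result = asr("recitation.wav")
print(result["text"])
```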
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 2000
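The settings above map roughly onto a `Seq2SeqTrainingArguments` configuration. A hedged reconstruction, not the exact training script (the `output_dir` is hypothetical, and mixed precision is an assumption not stated in the card):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-tarteel-quraan",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=2000,
    fp16=True,  # assumption: common for Whisper fine-tuning, not confirmed by the card
)
```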
### Training results
| Training Loss | Epoch | Step | Validation Loss | CER | WER |
|---|---|---|---|---|---|
| 0.0737 | 0.1145 | 125 | 0.0708 | 0.1345 | 0.3640 |
| 0.0585 | 0.2289 | 250 | 0.0494 | 0.1201 | 0.2884 |
| 0.0399 | 0.3434 | 375 | 0.0418 | 0.0987 | 0.2219 |
| 0.0262 | 0.4579 | 500 | 0.0372 | 0.0946 | 0.2043 |
| 0.0269 | 0.5723 | 625 | 0.0328 | 0.0903 | 0.1869 |
| 0.0415 | 0.6868 | 750 | 0.0306 | 0.0895 | 0.1774 |
| 0.0287 | 0.8013 | 875 | 0.0268 | 0.0871 | 0.1635 |
| 0.0246 | 0.9158 | 1000 | 0.0234 | 0.0823 | 0.1520 |
| 0.0160 | 1.0302 | 1125 | 0.0228 | 0.0824 | 0.1474 |
| 0.0190 | 1.1447 | 1250 | 0.0204 | 0.0782 | 0.1356 |
| 0.0176 | 1.2592 | 1375 | 0.0192 | 0.0797 | 0.1368 |
| 0.0091 | 1.3736 | 1500 | 0.0170 | 0.0774 | 0.1284 |
| 0.0123 | 1.4881 | 1625 | 0.0155 | 0.0768 | 0.1271 |
| 0.0080 | 1.6026 | 1750 | 0.0134 | 0.0746 | 0.1175 |
| 0.0055 | 1.7170 | 1875 | 0.0121 | 0.0748 | 0.1154 |
| 0.0080 | 1.8315 | 2000 | 0.0112 | 0.0736 | 0.1110 |
### Framework versions
- Transformers 5.1.0
- Pytorch 2.4.1+cu124
- Datasets 2.20.0
- Tokenizers 0.22.2