# Whisper for Turkish Call Centers
This model is a fine-tuned version of openai/whisper-large-v2 on a custom dataset of simulated Turkish call-center conversations. It achieves the following results on the evaluation set:

- Loss: 0.2582
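
As a usage illustration, the snippet below loads this checkpoint with the Transformers `pipeline` API for Turkish speech recognition. This is a minimal sketch, not part of the original card: the repository id is taken from the model page, and the audio path `call.wav` is a hypothetical placeholder.

```python
# Minimal inference sketch using the standard transformers ASR pipeline;
# the audio file path below is a hypothetical placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="alpcansoydas/whisper-large-v2-tr-ft-27-03-26-encoder-and-2decoder-25ksamples-simulated-data",
)

# Transcribe a Turkish call-center recording; long audio is processed in 30 s chunks.
result = asr(
    "call.wav",
    chunk_length_s=30,
    generate_kwargs={"language": "turkish", "task": "transcribe"},
)
print(result["text"])
```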
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- training_steps: 4500
- mixed_precision_training: Native AMP
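
For readers who want to reproduce this setup, the sketch below maps the list above onto `Seq2SeqTrainingArguments` from Transformers. It is a hedged reconstruction, not the card author's actual script: the output directory is a hypothetical placeholder, and dataset loading, the model, and the data collator are omitted.

```python
# Hedged reconstruction of the training configuration from the
# hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-tr-ft",  # hypothetical path
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",              # AdamW, torch fused implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    max_steps=4500,
    fp16=True,                              # "Native AMP" mixed-precision training
)
```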
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.3063 | 0.1777 | 250 | 0.3142 |
| 0.2611 | 0.3554 | 500 | 0.2932 |
| 0.2722 | 0.5330 | 750 | 0.2808 |
| 0.2908 | 0.7107 | 1000 | 0.2748 |
| 0.2686 | 0.8884 | 1250 | 0.2703 |
| 0.2464 | 1.0661 | 1500 | 0.2684 |
| 0.2418 | 1.2438 | 1750 | 0.2667 |
| 0.2371 | 1.4215 | 2000 | 0.2629 |
| 0.2327 | 1.5991 | 2250 | 0.2615 |
| 0.3082 | 1.7768 | 2500 | 0.2596 |
| 0.2448 | 1.9545 | 2750 | 0.2576 |
| 0.2332 | 2.1322 | 3000 | 0.2605 |
| 0.2258 | 2.3099 | 3250 | 0.2599 |
| 0.1708 | 2.4876 | 3500 | 0.2586 |
| 0.2318 | 2.6652 | 3750 | 0.2586 |
| 0.2174 | 2.8429 | 4000 | 0.2583 |
| 0.2212 | 3.0206 | 4250 | 0.2581 |
| 0.2098 | 3.1983 | 4500 | 0.2582 |
### Framework versions

- Transformers 5.3.0
- PyTorch 2.10.0+cu128
- Datasets 4.8.4
- Tokenizers 0.22.2