# whisper-large-v2-ft-cy-2601
This model is a fine-tuned version of openai/whisper-large-v2 on the DewiBrynJones/preprocessed-whisper-btb-cv-cvad-cven-wlga-ca-ec-2601 dataset. It achieves the following results on the evaluation set:
- Loss: 0.3660
- Wer: 0.3715
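
The snippet below is a minimal usage sketch, assuming the checkpoint is loaded through the `transformers` automatic-speech-recognition pipeline; the audio file name is a placeholder, not part of this repository.

```python
# Minimal inference sketch (assumes transformers plus an audio backend such as ffmpeg are installed).
# "example_welsh_audio.wav" is a placeholder; replace it with a real Welsh speech recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DewiBrynJones/whisper-large-v2-ft-cy-2601",
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

result = asr("example_welsh_audio.wav")
print(result["text"])
```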
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
- mixed_precision_training: Native AMP
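
For reference, here is a minimal sketch of how these settings might map onto `Seq2SeqTrainingArguments`; the output directory and the evaluation/generation options are assumptions for illustration, not taken from the original training script.

```python
# Hypothetical mapping of the hyperparameters above onto Seq2SeqTrainingArguments.
# Per-device batch sizes of 16 on 2 GPUs with 2 gradient-accumulation steps give the
# effective train batch size of 64 reported above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-ft-cy-2601",  # placeholder
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,
    learning_rate=5e-6,
    warmup_steps=500,
    max_steps=8000,
    lr_scheduler_type="cosine",
    optim="adamw_torch_fused",
    seed=42,
    fp16=True,               # Native AMP mixed-precision training
    eval_strategy="steps",   # evaluation every 500 steps, matching the results table
    eval_steps=500,
    predict_with_generate=True,
)
```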
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|---|---|---|---|---|
| 0.9408 | 0.1966 | 500 | 0.8202 | 0.5409 |
| 0.5988 | 0.3932 | 1000 | 0.5391 | 0.5537 |
| 0.5247 | 0.5899 | 1500 | 0.4744 | 0.4518 |
| 0.4995 | 0.7865 | 2000 | 0.4416 | 0.4371 |
| 0.456 | 0.9831 | 2500 | 0.4165 | 0.4022 |
| 0.3781 | 1.1797 | 3000 | 0.4080 | 0.3901 |
| 0.3756 | 1.3763 | 3500 | 0.3982 | 0.4062 |
| 0.3763 | 1.5729 | 4000 | 0.3892 | 0.3990 |
| 0.3509 | 1.7696 | 4500 | 0.3777 | 0.3831 |
| 0.3599 | 1.9662 | 5000 | 0.3745 | 0.3888 |
| 0.2981 | 2.1628 | 5500 | 0.3758 | 0.3634 |
| 0.29 | 2.3594 | 6000 | 0.3706 | 0.3904 |
| 0.2829 | 2.5560 | 6500 | 0.3677 | 0.3634 |
| 0.2892 | 2.7527 | 7000 | 0.3660 | 0.3789 |
| 0.286 | 2.9493 | 7500 | 0.3657 | 0.3772 |
| 0.2641 | 3.1459 | 8000 | 0.3660 | 0.3715 |
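
The WER figures above can be computed with the `evaluate` library, as in the sketch below; the prediction and reference strings are placeholders, not actual model output.

```python
# Minimal WER computation sketch; the strings are illustrative placeholders only.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["mae hi'n braf heddiw"]
references = ["mae hi'n braf heddiw"]
print(wer_metric.compute(predictions=predictions, references=references))
```

A final WER of 0.3715 means roughly 37% of reference words are substituted, inserted, or deleted in the model's transcriptions of the evaluation set.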
### Framework versions
- Transformers 4.57.6
- Pytorch 2.10.0+cu128
- Datasets 4.5.0
- Tokenizers 0.22.2