# Whisper fine-tuned on FluencyBank (openai/whisper-small)

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the FluencyBank Timestamped dataset. It achieves the following results on the evaluation set:

- Loss: 1.9714
- WER: 14.7220
- CER: 8.4574
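As a quick usage sketch (not part of the original card), the checkpoint should load through the standard `transformers` ASR pipeline; the model id is taken from this card, while the audio path and device selection are illustrative placeholders:

```python
import torch
from transformers import pipeline

# Load the fine-tuned checkpoint via the ASR pipeline.
# The model id comes from this card; everything else is illustrative.
asr = pipeline(
    "automatic-speech-recognition",
    model="arielcerdap/whisper-small-fluencybank",
    device=0 if torch.cuda.is_available() else -1,
)

# Transcribe a local audio file ("sample.wav" is a placeholder path).
result = asr("sample.wav")
print(result["text"])
```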
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 8e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2500
- label_smoothing_factor: 0.1
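These values map onto `Seq2SeqTrainingArguments` roughly as sketched below; only the numbers listed above come from the card, and `output_dir` is a placeholder. Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the `transformers` default optimizer configuration, so it needs no explicit argument here.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of a configuration matching the hyperparameters above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-fluencybank",  # placeholder
    learning_rate=8e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 x 4 = 128 total train batch size
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=2500,
    label_smoothing_factor=0.1,
)
```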
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER | CER |
|---|---|---|---|---|---|
| 1.5202 | 11.6279 | 250 | 1.7446 | 14.2167 | 10.7879 |
| 1.4329 | 23.2558 | 500 | 1.8143 | 13.4476 | 7.7609 |
| 1.4254 | 34.8837 | 750 | 1.8473 | 13.2938 | 7.7063 |
| 1.4211 | 46.5116 | 1000 | 1.9020 | 13.8651 | 7.8338 |
| 1.4201 | 58.1395 | 1250 | 1.9092 | 13.8211 | 7.8611 |
| 1.4189 | 69.7674 | 1500 | 1.9383 | 14.1727 | 8.1615 |
| 1.418 | 81.3953 | 1750 | 1.9574 | 14.3265 | 8.2161 |
| 1.4177 | 93.0233 | 2000 | 1.9669 | 14.5023 | 8.3572 |
| 1.4176 | 104.6512 | 2250 | 1.9709 | 14.5902 | 8.3663 |
| 1.4175 | 116.2791 | 2500 | 1.9714 | 14.7220 | 8.4574 |
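The WER and CER columns read as percentages; below is a minimal sketch of computing both metrics with the Hugging Face `evaluate` library. The exact evaluation code behind this card is not shown, so treat the choice of library as an assumption, and the transcript strings as toy examples.

```python
import evaluate

# Word and character error rates, as reported in the table above.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["the quick brown fox jumps over the lazy dog"]   # ground truth
predictions = ["the quick brown box jumps over the lazy dog"]  # model output

print("WER (%):", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER (%):", 100 * cer_metric.compute(predictions=predictions, references=references))
```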
### Framework versions
- Transformers 4.45.2
- PyTorch 2.10.0+cu128
- Datasets 4.0.0
- Tokenizers 0.20.3