2026-02-25_09-56-36

This model is a fine-tuned version of openai/whisper-small.en; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 1.0773
  • WER: 11.1749
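
The evaluation script is not included in this card, and the WER above appears to be expressed as a percentage. A minimal sketch of how such a score can be computed with the `evaluate` library (the predictions and references below are made up purely for illustration):

```python
import evaluate

# Load the word-error-rate metric from the `evaluate` library.
wer_metric = evaluate.load("wer")

# Hypothetical predictions and references, for illustration only.
predictions = ["the quick brown fox", "hello world"]
references = ["the quick brown fox jumps", "hello world"]

# `wer` returns a fraction; multiplying by 100 matches the
# percentage-style figure reported above (e.g. 11.1749).
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```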

Model description

More information needed

Intended uses & limitations

More information needed
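
While usage details are not provided, the checkpoint can be loaded like any Whisper model on the Hub. A minimal inference sketch, assuming the standard Transformers automatic-speech-recognition pipeline (the audio path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="SatwikDutta/2026-02-25_09-56-36",
)

# "sample.wav" is a placeholder path; the pipeline handles audio
# decoding and resampling to the model's expected sampling rate.
result = asr("sample.wav")
print(result["text"])
```

Note that whisper-small.en is an English-only base model, so this fine-tune presumably targets English speech.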

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto training arguments follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 10000
  • mixed_precision_training: Native AMP
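
The training script itself is not included in this card. As a sketch of how the settings above map onto Transformers' Seq2SeqTrainingArguments (the output directory is a placeholder, and anything not listed above is an assumption):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-en-finetuned",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```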

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|--------------:|-------:|-----:|----------------:|--------:|
| 2.1826        | 0.1123 | 500  | 1.2597          | 13.2054 |
| 1.1149        | 0.2247 | 1000 | 1.1731          | 12.1589 |
| 1.0703        | 0.3370 | 1500 | 1.1441          | 11.9967 |
| 1.0787        | 0.4493 | 2000 | 1.1242          | 11.7956 |
| 1.0588        | 0.5617 | 2500 | 1.1149          | 11.5296 |
| 1.0346        | 0.6740 | 3000 | 1.1049          | 11.4707 |
| 1.0206        | 0.7863 | 3500 | 1.0982          | 11.3632 |
| 1.024         | 0.8987 | 4000 | 1.0873          | 11.3176 |
| 1.0147        | 1.0110 | 4500 | 1.0833          | 11.1196 |
| 0.9595        | 1.1233 | 5000 | 1.0844          | 11.6402 |
| 0.9653        | 1.2357 | 5500 | 1.0825          | 11.2186 |
| 0.9606        | 1.3480 | 6000 | 1.0773          | 11.1749 |

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1

Model size

0.2B parameters (F32 tensors, safetensors format)