Whisper Small N-Augmented

This model is a fine-tuned version of openai/whisper-small on the N Demo Final (Augmented) dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5186
  • WER: 52.4803
  • CER: 23.2548
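
To try the checkpoint quickly, the sketch below loads it through the 🤗 Transformers pipeline API. This is a minimal example, assuming the repository id from this card; the audio path is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="wandererupak/whisper-small-n-demo-final-augmented-new-data-X",
)

# "sample.wav" is a placeholder; point it at any audio file ffmpeg can decode.
result = asr("sample.wav")
print(result["text"])
```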

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 0.1
  • num_epochs: 15
  • mixed_precision_training: Native AMP
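
As a rough reconstruction, the hyperparameters above map onto Seq2SeqTrainingArguments as sketched below. The output_dir is a placeholder, and the warmup value of 0.1 is assumed to be a ratio rather than a step count:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-n-augmented",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    optim="adamw_torch_fused",      # fused AdamW, as listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,               # assumption: the listed 0.1 is a ratio
    num_train_epochs=15,
    fp16=True,                      # "Native AMP" mixed precision
)
```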

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER     | CER     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:-------:|
| 1.4115        | 0.9965  | 285  | 0.6066          | 75.3937 | 27.9735 |
| 1.0703        | 1.9930  | 570  | 0.4821          | 64.8031 | 22.3183 |
| 0.8049        | 2.9895  | 855  | 0.4383          | 60.8268 | 23.3196 |
| 0.6565        | 3.9860  | 1140 | 0.4189          | 55.0787 | 19.8473 |
| 0.5014        | 4.9825  | 1425 | 0.4191          | 52.5591 | 18.8171 |
| 0.3444        | 5.9790  | 1710 | 0.4216          | 52.0472 | 19.9769 |
| 0.2355        | 6.9755  | 1995 | 0.4462          | 49.3307 | 18.1975 |
| 0.1876        | 7.9720  | 2280 | 0.4599          | 51.3780 | 19.2061 |
| 0.1180        | 8.9685  | 2565 | 0.4719          | 49.0157 | 17.7941 |
| 0.0884        | 9.9650  | 2850 | 0.4888          | 49.4094 | 18.0895 |
| 0.0712        | 10.9615 | 3135 | 0.5017          | 48.6220 | 17.7509 |
| 0.0525        | 11.9580 | 3420 | 0.5097          | 48.4252 | 17.5420 |
| 0.0477        | 12.9545 | 3705 | 0.5152          | 49.1732 | 18.4353 |
| 0.0469        | 13.9510 | 3990 | 0.5191          | 51.9291 | 23.0891 |
| 0.0473        | 14.9476 | 4275 | 0.5186          | 52.4803 | 23.2548 |
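
The WER and CER columns are word- and character-error rates, reported as percentages. Metrics like these are typically computed with the 🤗 Evaluate library; a small illustrative sketch (the transcripts below are placeholders, not from this dataset):

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Placeholder transcripts; in evaluation these come from model predictions
# and the reference labels of the evaluation set.
predictions = ["the quick brown fox jumped"]
references = ["the quick brown fox jumps"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```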

Framework versions

  • Transformers 5.2.0
  • PyTorch 2.9.0+cu128
  • Datasets 4.5.0
  • Tokenizers 0.22.2