easycall-whisper-lg-3-Nov29

This model is a fine-tuned version of openai/whisper-large-v3 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0857
  • WER: 8.1395
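WER here is the word error rate in percent: the minimum number of word substitutions, insertions, and deletions needed to turn the hypothesis into the reference transcript, divided by the number of reference words. A minimal pure-Python sketch of the metric (illustrative only; this is not the evaluation code used to produce the number above):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance normalized by
    the number of reference words, in percent."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# One deletion out of six reference words, i.e. roughly 16.67% WER.
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

In practice, evaluation scripts typically use a library implementation (e.g. `jiwer`) rather than hand-rolled code, but the definition is the same.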

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 100
  • mixed_precision_training: Native AMP
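These hyperparameters map directly onto Transformers' `Seq2SeqTrainingArguments`. A hedged sketch of an equivalent configuration (the `output_dir` and the evaluation/logging cadence are assumptions; the original training script is not included in this card):

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the hyperparameters listed above; output_dir and
# the eval cadence are assumptions, not taken from the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="easycall-whisper-lg-3-Nov29",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=100,
    fp16=True,                      # Native AMP mixed precision
    eval_strategy="steps",
    eval_steps=100,                 # matches the 100-step eval cadence in the results table
)
```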

Training results

Training Loss | Epoch  | Step | Validation Loss | WER
0.947         | 0.2151 |  100 | 0.2001          | 26.1628
0.1574        | 0.4301 |  200 | 0.1477          | 19.4767
0.1189        | 0.6452 |  300 | 0.1046          | 15.3101
0.098         | 0.8602 |  400 | 0.0869          | 12.9845
0.0896        | 1.0753 |  500 | 0.0847          | 12.6938
0.0591        | 1.2903 |  600 | 0.0853          | 10.6589
0.0699        | 1.5054 |  700 | 0.0784          |  9.1085
0.0724        | 1.7204 |  800 | 0.0865          | 11.8217
0.0704        | 1.9355 |  900 | 0.0701          |  9.1085
0.0508        | 2.1505 | 1000 | 0.0835          |  9.5930
0.0447        | 2.3656 | 1100 | 0.0760          | 10.0775
0.0426        | 2.5806 | 1200 | 0.0716          |  8.8178
0.0535        | 2.7957 | 1300 | 0.0703          | 10.3682
0.052         | 3.0108 | 1400 | 0.0714          |  8.6240
0.0336        | 3.2258 | 1500 | 0.0733          | 22.4806
0.0448        | 3.4409 | 1600 | 0.0616          |  9.0116
0.0421        | 3.6559 | 1700 | 0.0751          |  9.1085
0.031         | 3.8710 | 1800 | 0.0723          |  8.6240
0.0285        | 4.0860 | 1900 | 0.0755          |  8.3333
0.0233        | 4.3011 | 2000 | 0.0713          |  7.6550
0.0331        | 4.5161 | 2100 | 0.0880          |  9.6899
0.0278        | 4.7312 | 2200 | 0.0766          |  8.4302
0.0342        | 4.9462 | 2300 | 0.0863          | 10.9496
0.0275        | 5.1613 | 2400 | 0.0929          |  9.3023
0.0224        | 5.3763 | 2500 | 0.0851          | 17.7326
0.0232        | 5.5914 | 2600 | 0.0964          | 10.4651
0.0283        | 5.8065 | 2700 | 0.0766          |  9.7868
0.0336        | 6.0215 | 2800 | 0.0729          |  8.5271
0.0202        | 6.2366 | 2900 | 0.0802          |  8.8178
0.02          | 6.4516 | 3000 | 0.0864          |  9.2054
0.0203        | 6.6667 | 3100 | 0.0841          | 10.8527
0.0292        | 6.8817 | 3200 | 0.0811          |  9.1085
0.0211        | 7.0968 | 3300 | 0.0752          |  8.7209
0.0161        | 7.3118 | 3400 | 0.0857          |  8.1395
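The `linear` scheduler warms the learning rate up from 0 to 1e-5 over the first 100 steps, then decays it linearly toward 0 at the final training step. A sketch of that schedule (the `total_steps` value here is a placeholder for illustration; the real total is fixed by the dataset size and the configured 100 epochs):

```python
def linear_lr(step: int, base_lr: float = 1e-5,
              warmup_steps: int = 100, total_steps: int = 10_000) -> float:
    """Linear warmup followed by linear decay, in the style of
    Transformers' get_linear_schedule_with_warmup.
    total_steps is an illustrative placeholder."""
    if step < warmup_steps:
        # Ramp from 0 up to base_lr during warmup.
        return base_lr * step / warmup_steps
    # Decay linearly from base_lr at the end of warmup to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

for s in (0, 50, 100, 5_000, 10_000):
    print(f"step {s:>6}: lr = {linear_lr(s):.2e}")
```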

Framework versions

  • Transformers 4.43.4
  • Pytorch 2.4.1
  • Datasets 3.0.0
  • Tokenizers 0.19.1
Model details

  • Format: Safetensors
  • Model size: 2B params
  • Tensor type: F32

Model tree for sqrk/easycall-whisper-lg-3-Nov29

This model is a fine-tune of openai/whisper-large-v3.