whisper-large-telephone-v1

This model is a fine-tuned version of openai/whisper-large-v3 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4392
  • Wer: 52.2874

Model description

More information needed

Intended uses & limitations

More information needed
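
Since the card does not include a usage snippet, the following is a hypothetical inference sketch using the standard transformers ASR pipeline. The repository id is taken from this card; the audio file name and the `chunk_length_s` setting are placeholder assumptions, not verified against this repository.

```python
# Hypothetical usage sketch for this checkpoint (assumptions noted above).
MODEL_ID = "samil24/whisper-large-telephone-v1"


def transcribe(audio_path: str) -> str:
    # Import lazily so the (large) model is only downloaded when actually used.
    from transformers import pipeline

    asr = pipeline(
        "automatic-speech-recognition",
        model=MODEL_ID,
        chunk_length_s=30,  # Whisper processes audio in 30 s windows
    )
    return asr(audio_path)["text"]


if __name__ == "__main__":
    # "call_recording.wav" is a placeholder path.
    print(transcribe("call_recording.wav"))
```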

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1250
  • num_epochs: 12
  • mixed_precision_training: Native AMP
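
The linear schedule with warmup listed above can be sketched as a plain function. The total step count here (~42 000) is an assumption read off the last logged step in the results table; in practice the trainer derives it from the dataset size and batch size.

```python
def linear_warmup_lr(step, base_lr=3e-05, warmup_steps=1250, total_steps=42000):
    """Linear warmup to base_lr over warmup_steps, then linear decay to 0.

    total_steps (~42k) is an assumption taken from the last logged step
    in this card's results table, not a value stated by the authors.
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))


# The peak learning rate (3e-05) is reached exactly at the end of warmup,
# i.e. at step 1250, then decays linearly to 0 at total_steps.
```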

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
| --- | --- | --- | --- | --- |
| 1.0345 | 0.1416 | 500 | 1.4197 | 109.5730 |
| 0.8638 | 0.2832 | 1000 | 1.2463 | 80.6058 |
| 0.8131 | 0.4248 | 1500 | 1.1927 | 89.7440 |
| 0.7539 | 0.5664 | 2000 | 1.1132 | 86.8466 |
| 0.7429 | 0.7080 | 2500 | 1.0692 | 78.1367 |
| 0.7524 | 0.8496 | 3000 | 1.0429 | 67.9277 |
| 0.6429 | 0.9912 | 3500 | 0.9951 | 79.6665 |
| 0.5777 | 1.1328 | 4000 | 0.9834 | 70.1924 |
| 0.5681 | 1.2744 | 4500 | 0.9818 | 63.9629 |
| 0.509 | 1.4160 | 5000 | 0.9620 | 60.5350 |
| 0.5353 | 1.5576 | 5500 | 0.9432 | 65.0628 |
| 0.4851 | 1.6992 | 6000 | 0.9316 | 58.2249 |
| 0.501 | 1.8408 | 6500 | 0.9281 | 67.6438 |
| 0.5861 | 1.9824 | 7000 | 0.9029 | 60.6275 |
| 0.3368 | 2.1240 | 7500 | 0.9508 | 59.1756 |
| 0.3633 | 2.2656 | 8000 | 0.9263 | 55.8807 |
| 0.3257 | 2.4073 | 8500 | 0.9449 | 57.2272 |
| 0.361 | 2.5489 | 9000 | 0.9355 | 61.9156 |
| 0.337 | 2.6905 | 9500 | 0.9372 | 58.0384 |
| 0.3408 | 2.8321 | 10000 | 0.9153 | 56.0965 |
| 0.3362 | 2.9737 | 10500 | 0.9213 | 57.5663 |
| 0.196 | 3.1153 | 11000 | 0.9820 | 57.2597 |
| 0.1815 | 3.2569 | 11500 | 1.0006 | 55.0728 |
| 0.178 | 3.3985 | 12000 | 1.0060 | 57.1818 |
| 0.2 | 3.5401 | 12500 | 0.9999 | 54.7241 |
| 0.1896 | 3.6817 | 13000 | 0.9970 | 54.0346 |
| 0.1881 | 3.8233 | 13500 | 0.9912 | 56.3187 |
| 0.2182 | 3.9649 | 14000 | 0.9785 | 55.4233 |
| 0.0855 | 4.1065 | 14500 | 1.0754 | 54.8181 |
| 0.1033 | 4.2481 | 15000 | 1.0664 | 55.2594 |
| 0.0969 | 4.3897 | 15500 | 1.0600 | 55.6406 |
| 0.0985 | 4.5313 | 16000 | 1.0799 | 57.9232 |
| 0.094 | 4.6729 | 16500 | 1.1025 | 56.3755 |
| 0.1026 | 4.8145 | 17000 | 1.0939 | 54.8814 |
| 0.1029 | 4.9561 | 17500 | 1.0987 | 54.8765 |
| 0.0444 | 5.0977 | 18000 | 1.1591 | 54.4142 |
| 0.047 | 5.2393 | 18500 | 1.1655 | 57.5711 |
| 0.0519 | 5.3809 | 19000 | 1.1609 | 55.7461 |
| 0.0479 | 5.5225 | 19500 | 1.1737 | 54.6381 |
| 0.046 | 5.6641 | 20000 | 1.1773 | 55.1053 |
| 0.0448 | 5.8057 | 20500 | 1.1837 | 54.6413 |
| 0.0422 | 5.9473 | 21000 | 1.1810 | 55.3535 |
| 0.0247 | 6.0889 | 21500 | 1.2235 | 54.7095 |
| 0.0267 | 6.2305 | 22000 | 1.2358 | 54.3655 |
| 0.0222 | 6.3721 | 22500 | 1.2313 | 55.2367 |
| 0.0281 | 6.5137 | 23000 | 1.2455 | 54.3380 |
| 0.0231 | 6.6553 | 23500 | 1.2549 | 53.6096 |
| 0.0278 | 6.7969 | 24000 | 1.2543 | 53.9308 |
| 0.0276 | 6.9385 | 24500 | 1.2500 | 53.9389 |
| 0.0121 | 7.0801 | 25000 | 1.2829 | 53.9535 |
| 0.0164 | 7.2218 | 25500 | 1.2841 | 53.6031 |
| 0.0164 | 7.3634 | 26000 | 1.2826 | 55.3000 |
| 0.0139 | 7.5050 | 26500 | 1.2754 | 53.8237 |
| 0.0141 | 7.6466 | 27000 | 1.3143 | 54.2244 |
| 0.0157 | 7.7882 | 27500 | 1.2910 | 53.2981 |
| 0.0155 | 7.9298 | 28000 | 1.2914 | 54.4580 |
| 0.0083 | 8.0714 | 28500 | 1.3115 | 53.1066 |
| 0.0082 | 8.2130 | 29000 | 1.3224 | 53.2072 |
| 0.0088 | 8.3546 | 29500 | 1.3260 | 54.2455 |
| 0.008 | 8.4962 | 30000 | 1.3389 | 52.7579 |
| 0.0065 | 8.6378 | 30500 | 1.3449 | 54.0897 |
| 0.0086 | 8.7794 | 31000 | 1.3381 | 54.0719 |
| 0.0111 | 8.9210 | 31500 | 1.3331 | 53.9843 |
| 0.005 | 9.0626 | 32000 | 1.3531 | 53.3403 |
| 0.0033 | 9.2042 | 32500 | 1.3539 | 53.9535 |
| 0.0041 | 9.3458 | 33000 | 1.3654 | 52.9055 |
| 0.0027 | 9.4874 | 33500 | 1.3526 | 53.0628 |
| 0.0038 | 9.6290 | 34000 | 1.3802 | 52.8730 |
| 0.0054 | 9.7706 | 34500 | 1.3724 | 53.2235 |
| 0.003 | 9.9122 | 35000 | 1.3734 | 53.4635 |
| 0.0024 | 10.0538 | 35500 | 1.3829 | 52.5421 |
| 0.002 | 10.1954 | 36000 | 1.3788 | 52.9347 |
| 0.0034 | 10.3370 | 36500 | 1.3749 | 52.8146 |
| 0.0013 | 10.4786 | 37000 | 1.3951 | 52.4269 |
| 0.0014 | 10.6202 | 37500 | 1.3997 | 52.6865 |
| 0.0015 | 10.7618 | 38000 | 1.3900 | 52.6589 |
| 0.0013 | 10.9034 | 38500 | 1.4116 | 52.4513 |
| 0.0014 | 11.0450 | 39000 | 1.4101 | 52.2014 |
| 0.0032 | 11.1866 | 39500 | 1.4156 | 52.5113 |
| 0.0005 | 11.3282 | 40000 | 1.4236 | 52.3182 |
| 0.0003 | 11.4698 | 40500 | 1.4343 | 52.2890 |
| 0.0002 | 11.6114 | 41000 | 1.4376 | 52.2063 |
| 0.0003 | 11.7530 | 41500 | 1.4428 | 52.1998 |
| 0.0002 | 11.8946 | 42000 | 1.4392 | 52.2874 |
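
The Wer column above is a word error rate reported in percent. As a minimal sketch of the metric (word-level Levenshtein distance over the reference length; real evaluations typically use a library such as jiwer or evaluate):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words.

    Returns a fraction; multiply by 100 for the percentage shown in the
    table above (e.g. a final Wer of 52.2874 means ~0.52 as a fraction).
    """
    ref, hyp = reference.split(), hypothesis.split()
    # Single-row dynamic programming over the edit-distance matrix.
    row = list(range(len(hyp) + 1))
    for i, ref_word in enumerate(ref, 1):
        prev_diag = row[0]  # dp[i-1][j-1]
        row[0] = i
        for j, hyp_word in enumerate(hyp, 1):
            prev_above = row[j]  # dp[i-1][j]
            row[j] = min(
                row[j] + 1,                          # deletion
                row[j - 1] + 1,                      # insertion
                prev_diag + (ref_word != hyp_word),  # substitution / match
            )
            prev_diag = prev_above
    return row[-1] / len(ref)
```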

Framework versions

  • Transformers 4.54.0.dev0
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.2

Model size: 2B params · Tensor type: F32 · Format: Safetensors

Model tree for samil24/whisper-large-telephone-v1

Fine-tuned from openai/whisper-large-v3.