whisper-small-kurmanji-v10

This model is a fine-tuned version of openai/whisper-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5636
  • WER: 16.0465
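For reference, the WER above is a percentage: the word-level edit distance between the model transcript and the reference, divided by the number of reference words. A minimal sketch of the computation (this `wer` helper is illustrative only, not the metric implementation used to produce this card):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count, in percent."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Classic dynamic-programming edit distance over words
    # (substitutions, insertions, and deletions all cost 1).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("ez diçim malê", "ez diçime malê"))  # one word substituted out of three
```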

Model description

More information needed

Intended uses & limitations

More information needed
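Although the card does not document intended uses, the checkpoint can be loaded like any Whisper fine-tune. A hedged sketch, assuming the repository id `samil24/whisper-small-kurmanji-v10` (the repository this card belongs to) and using the standard Transformers ASR pipeline:

```python
MODEL_ID = "samil24/whisper-small-kurmanji-v10"

def transcribe(audio_path: str) -> str:
    """Transcribe a Kurmanji audio file with the fine-tuned checkpoint."""
    # Imported lazily so the helper can be defined without transformers installed;
    # the first call downloads the model from the Hugging Face Hub.
    from transformers import pipeline

    asr = pipeline(
        "automatic-speech-recognition",
        model=MODEL_ID,
        chunk_length_s=30,  # Whisper processes audio in 30-second windows
    )
    return asr(audio_path)["text"]

# Example (requires an audio file and a network connection on first run):
# print(transcribe("sample_kurmanji.wav"))
```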

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 15
  • mixed_precision_training: Native AMP
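The hyperparameters above map onto `Seq2SeqTrainingArguments` in Transformers 4.52. A sketch under those assumptions (`output_dir` is a placeholder, the 500-step eval cadence is inferred from the results table below, and dataset/collator wiring is omitted):

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-kurmanji-v10",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",          # AdamW; betas=(0.9, 0.999) and eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=15,
    fp16=True,                    # native AMP mixed precision
    eval_strategy="steps",        # assumption: matches the 500-step evaluation cadence
    eval_steps=500,
    predict_with_generate=True,   # generate text at eval time so WER can be computed
)
```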

Training results

| Training Loss | Epoch   | Step  | Validation Loss | WER     |
|--------------:|--------:|------:|----------------:|--------:|
| 0.539         | 0.4941  | 500   | 0.6416          | 35.1020 |
| 0.3446        | 0.9881  | 1000  | 0.4576          | 24.7569 |
| 0.242         | 1.4822  | 1500  | 0.3910          | 21.1218 |
| 0.2412        | 1.9763  | 2000  | 0.3525          | 19.9057 |
| 0.1419        | 2.4704  | 2500  | 0.3544          | 18.5183 |
| 0.1282        | 2.9644  | 3000  | 0.3399          | 17.3549 |
| 0.0793        | 3.4585  | 3500  | 0.3527          | 17.9382 |
| 0.0837        | 3.9526  | 4000  | 0.3547          | 17.3417 |
| 0.0438        | 4.4466  | 4500  | 0.3792          | 16.9693 |
| 0.0409        | 4.9407  | 5000  | 0.3831          | 17.0319 |
| 0.0206        | 5.4348  | 5500  | 0.4115          | 17.2923 |
| 0.0219        | 5.9289  | 6000  | 0.4062          | 16.5870 |
| 0.0089        | 6.4229  | 6500  | 0.4445          | 16.9528 |
| 0.0089        | 6.9170  | 7000  | 0.4532          | 16.8540 |
| 0.0061        | 7.4111  | 7500  | 0.4590          | 16.5804 |
| 0.0077        | 7.9051  | 8000  | 0.4735          | 16.3366 |
| 0.0033        | 8.3992  | 8500  | 0.4813          | 16.5178 |
| 0.0028        | 8.8933  | 9000  | 0.4968          | 16.8144 |
| 0.0016        | 9.3874  | 9500  | 0.5113          | 16.5013 |
| 0.0026        | 9.8814  | 10000 | 0.5040          | 16.6892 |
| 0.001         | 10.3755 | 10500 | 0.5144          | 16.2377 |
| 0.0008        | 10.8696 | 11000 | 0.5250          | 15.9740 |
| 0.0006        | 11.3636 | 11500 | 0.5317          | 16.0202 |
| 0.0005        | 11.8577 | 12000 | 0.5387          | 16.0235 |
| 0.0004        | 12.3518 | 12500 | 0.5498          | 16.0960 |
| 0.0004        | 12.8458 | 13000 | 0.5582          | 16.0399 |
| 0.0003        | 13.3399 | 13500 | 0.5577          | 15.9543 |
| 0.0003        | 13.8340 | 14000 | 0.5590          | 15.9477 |
| 0.0002        | 14.3281 | 14500 | 0.5609          | 15.9938 |
| 0.0003        | 14.8221 | 15000 | 0.5636          | 16.0465 |

Framework versions

  • Transformers 4.52.4
  • Pytorch 2.5.1+cu121
  • Datasets 3.6.0
  • Tokenizers 0.21.1