whisper-uzbek-multi-dataset

This model is a fine-tuned version of openai/whisper-large-v3-turbo for Uzbek automatic speech recognition. The model name indicates training on multiple Uzbek datasets, but the specific datasets are not documented in this card. It achieves the following results on the evaluation set:

  • Loss: 0.2215
  • WER: 24.5188
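
The model can be loaded with the 🤗 Transformers ASR pipeline. A minimal inference sketch, assuming the checkpoint is hosted on the Hub under Dovud-Asadov/whisper-uzbek-multi-dataset and the input is a 16 kHz mono recording (audio.wav is a placeholder path):

```python
# Minimal inference sketch; the repo id is taken from this card,
# the audio path is a placeholder.
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
asr = pipeline(
    "automatic-speech-recognition",
    model="Dovud-Asadov/whisper-uzbek-multi-dataset",
    device=device,
)

# Force Uzbek transcription so Whisper does not auto-detect the language.
result = asr("audio.wav", generate_kwargs={"language": "uzbek", "task": "transcribe"})
print(result["text"])
```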

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 3
  • mixed_precision_training: Native AMP
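
The card does not include the training script. As a hedged sketch, the hyperparameters listed above map onto transformers Seq2SeqTrainingArguments roughly as follows (output_dir is an assumed name, not taken from the card):

```python
# Hedged sketch: the listed hyperparameters expressed as
# transformers Seq2SeqTrainingArguments (exact training script unknown).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-uzbek-multi-dataset",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 4 * 2 = 8
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```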

Training results

| Training Loss | Epoch  | Step  | Validation Loss | WER     |
|---------------|--------|-------|-----------------|---------|
| 0.4219        | 0.1577 | 1000  | 0.4203          | 37.8390 |
| 0.3467        | 0.3154 | 2000  | 0.3568          | 35.5245 |
| 0.3152        | 0.4731 | 3000  | 0.3146          | 31.8340 |
| 0.2767        | 0.6309 | 4000  | 0.2984          | 31.7955 |
| 0.2908        | 0.7886 | 5000  | 0.2841          | 30.2510 |
| 0.2589        | 0.9463 | 6000  | 0.2702          | 28.5905 |
| 0.1941        | 1.1039 | 7000  | 0.2648          | 28.2867 |
| 0.1778        | 1.2617 | 8000  | 0.2613          | 27.8530 |
| 0.2072        | 1.4194 | 9000  | 0.2538          | 27.3528 |
| 0.1944        | 1.5771 | 10000 | 0.2466          | 26.9360 |
| 0.1889        | 1.7348 | 11000 | 0.2417          | 26.5371 |
| 0.1778        | 1.8925 | 12000 | 0.2361          | 25.7168 |
| 0.1384        | 2.0502 | 13000 | 0.2345          | 25.6473 |
| 0.1287        | 2.2079 | 14000 | 0.2345          | 25.8865 |
| 0.1335        | 2.3656 | 15000 | 0.2314          | 25.3250 |
| 0.1342        | 2.5233 | 16000 | 0.2286          | 24.9627 |
| 0.1290        | 2.6810 | 17000 | 0.2255          | 24.7568 |
| 0.1213        | 2.8387 | 18000 | 0.2232          | 24.5562 |
| 0.1255        | 2.9965 | 19000 | 0.2215          | 24.5188 |
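
The card does not state how WER was computed. A common pattern in Whisper fine-tuning scripts uses the 🤗 Evaluate library, as sketched below; the transcripts are hypothetical, and the reported WER is on a 0-100 scale:

```python
# Hedged sketch: computing WER with the evaluate library, as is
# common in Whisper fine-tuning scripts (exact setup unknown).
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["salom dunyo"]          # hypothetical decoded transcription
references = ["salom dunyo qalaysan"]  # hypothetical reference transcript
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # this card reports WER on a 0-100 scale
```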

Framework versions

  • Transformers 4.57.1
  • PyTorch 2.9.0+cu130
  • Datasets 4.2.0
  • Tokenizers 0.22.1