whisper-small-umbundu

This model is a fine-tuned version of openai/whisper-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.5770
  • WER: 0.5981
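
Loss is the evaluation cross-entropy loss; WER is the word error rate. As a point of reference, here is a minimal sketch of computing WER with the Hugging Face `evaluate` library (an assumption; the card does not state which tool produced the metric):

```python
# Hedged sketch: computing word error rate (WER) with the `evaluate` library.
# The library choice and the example strings are assumptions, not from the card.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["hello world this is a test"]   # hypothetical model outputs
references = ["hello world this is the test"]  # hypothetical references

# WER = (substitutions + insertions + deletions) / reference word count
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 1 substitution over 6 words -> 0.1667
```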

Model description

More information needed

Intended uses & limitations

More information needed
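
Although this section is left open, the checkpoint can be loaded like any Whisper fine-tune. A minimal inference sketch with the transformers ASR pipeline, assuming the hub id misterkissi/whisper-small-umbundu (the id under which this card is published) and a hypothetical audio file:

```python
# Hedged sketch: transcription with the transformers ASR pipeline.
# The audio path is hypothetical; the pipeline resamples the input to the
# 16 kHz rate that Whisper expects.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="misterkissi/whisper-small-umbundu",
)

result = asr("sample_umbundu.wav")  # hypothetical audio file
print(result["text"])
```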

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged Seq2SeqTrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
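
The sketch below maps those settings onto transformers Seq2SeqTrainingArguments; the output path and any argument not listed above are assumptions, since the training script itself is not part of this card.

```python
# Hedged sketch of Seq2SeqTrainingArguments matching the hyperparameters above.
# output_dir is hypothetical; the other values come from the list in this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-umbundu",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    optim="adamw_torch",            # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                      # native AMP mixed-precision training
)
```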

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER    |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 1.2102        | 9.4340  | 500  | 1.1333          | 0.5466 |
| 0.0459        | 18.8679 | 1000 | 1.3514          | 0.6631 |
| 0.0030        | 28.3019 | 1500 | 1.4324          | 0.6576 |
| 0.0025        | 37.7358 | 2000 | 1.4792          | 0.5691 |
| 0.0008        | 47.1698 | 2500 | 1.5070          | 0.5869 |
| 0.0003        | 56.6038 | 3000 | 1.5325          | 0.6020 |
| 0.0003        | 66.0377 | 3500 | 1.5505          | 0.5989 |
| 0.0002        | 75.4717 | 4000 | 1.5636          | 0.6033 |
| 0.0002        | 84.9057 | 4500 | 1.5732          | 0.5965 |
| 0.0002        | 94.3396 | 5000 | 1.5770          | 0.5981 |

Framework versions

  • Transformers 4.53.2
  • Pytorch 2.6.0+cu124
  • Datasets 2.14.4
  • Tokenizers 0.21.2
