
baoule

This model is a fine-tuned version of openai/whisper-small on the abdouaziz/baoule2 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2863
  • Wer: 0.1696
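As a quick way to try the checkpoint, the standard `transformers` ASR pipeline can be used. This is a minimal sketch, not part of the original card: it assumes the `transformers` and `torch` packages are installed and that `sample.wav` is a hypothetical local audio file.

```python
# Minimal sketch: transcribe Baoulé speech with the fine-tuned checkpoint.
# Assumes `transformers` + `torch` are installed; "sample.wav" is a
# hypothetical placeholder for your own audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="abdouaziiz/baoule",
)

# result = asr("sample.wav")
# print(result["text"])
```

The pipeline handles feature extraction and decoding internally; for long recordings, passing `chunk_length_s=30` keeps inputs within Whisper's 30-second window.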

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • training_steps: 48000
  • mixed_precision_training: Native AMP

Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer    |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 0.4736        | 2.4510  | 500  | 0.2696          | 0.3169 |
| 0.0515        | 4.9020  | 1000 | 0.2703          | 0.1995 |
| 0.0185        | 7.3529  | 1500 | 0.2761          | 0.2021 |
| 0.0112        | 9.8039  | 2000 | 0.2853          | 0.2609 |
| 0.0083        | 12.2549 | 2500 | 0.2922          | 0.1829 |
| 0.0073        | 14.7059 | 3000 | 0.2884          | 0.1711 |
| 0.0066        | 17.1569 | 3500 | 0.2849          | 0.1724 |
| 0.0063        | 19.6078 | 4000 | 0.2863          | 0.1696 |
| 0.0053        | 22.0588 | 4500 | 0.2889          | 0.1768 |
| 0.0053        | 24.5098 | 5000 | 0.2951          | 0.1757 |
| 0.0047        | 26.9608 | 5500 | 0.2844          | 0.1837 |

Framework versions

  • Transformers 4.46.0
  • Pytorch 2.7.0+cu126
  • Datasets 3.3.2
  • Tokenizers 0.20.3