whisper-large-v3-turbo-ESLO_REP

This model is a fine-tuned version of openai/whisper-large-v3-turbo on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3035
  • WER: 65.8438 (word error rate, reported as a percentage)
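
The card does not include usage instructions. The sketch below shows one way to run inference with the transformers pipeline API, assuming the checkpoint is published under Rziane/whisper-large-v3-turbo-ESLO_REP; the audio file name is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="Rziane/whisper-large-v3-turbo-ESLO_REP",
)

# "sample.wav" is a hypothetical local audio file.
result = asr("sample.wav")
print(result["text"])
```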

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 15
  • mixed_precision_training: Native AMP
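
As a rough guide, these settings map onto transformers Seq2SeqTrainingArguments as sketched below. This is not the exact training script; output_dir and any argument not in the list above are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the listed hyperparameters; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-turbo-ESLO_REP",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,  # effective total train batch size: 32
    warmup_steps=500,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    optim="adamw_torch",            # AdamW with betas=(0.9, 0.999), eps=1e-8
    seed=42,
    fp16=True,                      # native AMP mixed-precision training
)
```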

Training results

Training Loss   Epoch   Step   Validation Loss   WER
1.0684           1.0     299   0.9910            67.5694
0.8539           2.0     598   0.9916            74.2320
0.6497           3.0     897   0.9486            61.4563
0.4445           4.0    1196   0.9633            70.8195
0.3109           5.0    1495   1.0225            86.3886
0.2237           6.0    1794   1.0533            68.8076
0.1556           7.0    2093   1.0990            66.6099
0.1219           8.0    2392   1.1393            65.8671
0.0916           9.0    2691   1.1718            67.8558
0.0690          10.0    2990   1.1896            66.9659
0.0592          11.0    3289   1.2018            69.3957
0.0517          12.0    3588   1.2355            66.6022
0.0416          13.0    3887   1.2457            66.2230
0.0335          14.0    4186   1.2507            65.9290
0.0261          15.0    4485   1.3035            65.8438
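
The card does not state which tooling computed the WER values; their magnitude is consistent with a word error rate scaled to percent, as in the Hugging Face evaluate library. A minimal sketch, with illustrative strings only:

```python
import evaluate

# evaluate's "wer" metric returns a fraction; multiply by 100 to match
# the percentages reported in the table above.
wer_metric = evaluate.load("wer")

predictions = ["bonjour tout le monde"]    # hypothetical model output
references = ["bonjour à tout le monde"]   # hypothetical ground truth
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")
```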

Framework versions

  • Transformers 4.46.2
  • PyTorch 2.4.1+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.0

Model weights

  • Format: Safetensors
  • Model size: 0.8B params
  • Tensor type: F32