
whisper-5hrs-meta-seed333

This model is a fine-tuned version of openai/whisper-large-v2 on the JASMIN-CGN dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4032
  • Wer: 18.9888
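
Because this checkpoint is a PEFT adapter rather than a full model, inference requires loading the openai/whisper-large-v2 base model first and then attaching the adapter. The following is a minimal sketch, not the author's exact inference script; it assumes the adapter is published as greenw0lf/whisper-5hrs-meta-seed333 and that `audio` is a 16 kHz mono waveform (JASMIN-CGN is a Dutch speech corpus):

```python
# Minimal inference sketch (assumption: adapter repo id and fp16/GPU setup).
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base_id = "openai/whisper-large-v2"
adapter_id = "greenw0lf/whisper-5hrs-meta-seed333"

processor = WhisperProcessor.from_pretrained(base_id)
model = WhisperForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the PEFT adapter
model.eval()

# `audio` is assumed to be a 16 kHz mono float array, e.g. loaded via librosa.
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
features = inputs.input_features.to(model.device, dtype=torch.float16)
with torch.no_grad():
    generated_ids = model.generate(input_features=features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```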

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 0.0001
  • train_batch_size: 48
  • eval_batch_size: 32
  • seed: 42
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 12
  • num_epochs: 3.0
  • mixed_precision_training: Native AMP
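
For reference, here is a sketch of transformers `Seq2SeqTrainingArguments` that mirrors the list above. It is not the card's actual training script: `output_dir` is a placeholder, the data collator and PEFT configuration are not shown, and `train_batch_size` is mapped to `per_device_train_batch_size`, which assumes single-device training.

```python
# Sketch of training arguments matching the hyperparameter list (assumptions noted above).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-5hrs-meta-seed333",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",          # AdamW with default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_steps=12,
    num_train_epochs=3.0,
    fp16=True,                    # "Native AMP" mixed-precision training
)
```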

Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.9872        | 0.2927 | 12   | 1.1235          | 36.6189 |
| 0.8638        | 0.5854 | 24   | 0.8173          | 34.8106 |
| 0.5604        | 0.8780 | 36   | 0.5658          | 26.0509 |
| 0.444         | 1.1707 | 48   | 0.4771          | 19.8712 |
| 0.432         | 1.4634 | 60   | 0.4417          | 18.9553 |
| 0.3875        | 1.7561 | 72   | 0.4236          | 19.6397 |
| 0.3616        | 2.0488 | 84   | 0.4130          | 20.2872 |
| 0.3057        | 2.3415 | 96   | 0.4078          | 20.2402 |
| 0.3899        | 2.6341 | 108  | 0.4046          | 19.0392 |
| 0.3609        | 2.9268 | 120  | 0.4032          | 18.9888 |
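
The Wer column above is the word error rate reported as a percentage. The card does not state the exact metric script; a common way to compute such figures is the `evaluate` library, sketched here with hypothetical transcripts:

```python
# Hedged example of computing WER as a percentage (hypothetical strings, not
# actual JASMIN-CGN data or this model's outputs).
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["de kat zit op de mat"]  # hypothetical model output
references = ["de kat zat op de mat"]   # hypothetical ground truth
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 1 substitution over 6 words -> 16.6667
```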

Framework versions

  • PEFT 0.16.0
  • Transformers 4.57.3
  • Pytorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.22.2