
exp2-whisper-child-classified

This model is a fine-tuned version of openai/whisper-large-v2 on the JASMIN-CGN dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4059
  • Wer: 19.8242
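The Wer figure is the word error rate in percent. As an illustration of what that metric measures (the card does not state which tool produced the number above), a minimal sketch computes word-level edit distance divided by reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count, in percent."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for edit distance between word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# One substitution and one deletion against a six-word reference -> 33.33%.
print(wer("de kat zit op de mat", "de kat zat op mat"))
```

In practice this is usually delegated to a library such as `jiwer`; the hand-rolled version is only meant to make the metric concrete.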

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 48
  • eval_batch_size: 32
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 54
  • num_epochs: 3.0
  • mixed_precision_training: Native AMP
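The linear schedule with warmup can be sketched as a plain function of the step count (the total of 525 steps comes from the training results below; the function name and implementation are illustrative, not the Trainer's internals):

```python
def linear_lr(step: int, base_lr: float = 1e-05,
              warmup_steps: int = 54, total_steps: int = 525) -> float:
    """Linear warmup from 0 to base_lr over warmup_steps,
    then linear decay back to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_lr(0))    # start of warmup: 0.0
print(linear_lr(54))   # peak learning rate: 1e-05
print(linear_lr(525))  # end of training: 0.0
```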

Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|---------------|--------|------|-----------------|---------|
| 1.0368        | 0.1397 | 25   | 1.2168          | 38.0313 |
| 0.9962        | 0.2793 | 50   | 1.1738          | 37.1423 |
| 1.0082        | 0.4190 | 75   | 1.0938          | 36.2465 |
| 0.8819        | 0.5587 | 100  | 0.9987          | 34.8375 |
| 0.8438        | 0.6983 | 125  | 0.8940          | 34.3174 |
| 0.7026        | 0.8380 | 150  | 0.7810          | 32.3984 |
| 0.6271        | 0.9777 | 175  | 0.6621          | 32.2709 |
| 0.5367        | 1.1173 | 200  | 0.5746          | 29.4729 |
| 0.5252        | 1.2570 | 225  | 0.5207          | 26.1683 |
| 0.4623        | 1.3966 | 250  | 0.4818          | 23.5750 |
| 0.4399        | 1.5363 | 275  | 0.4557          | 23.7394 |
| 0.4141        | 1.6760 | 300  | 0.4407          | 21.9814 |
| 0.4084        | 1.8156 | 325  | 0.4310          | 21.3742 |
| 0.4160        | 1.9553 | 350  | 0.4240          | 20.8105 |
| 0.3979        | 2.0950 | 375  | 0.4190          | 20.6696 |
| 0.3963        | 2.2346 | 400  | 0.4152          | 20.0758 |
| 0.4103        | 2.3743 | 425  | 0.4119          | 19.5525 |
| 0.4087        | 2.5140 | 450  | 0.4094          | 19.9349 |
| 0.4359        | 2.6536 | 475  | 0.4076          | 19.8980 |
| 0.4066        | 2.7933 | 500  | 0.4065          | 19.8544 |
| 0.3824        | 2.9330 | 525  | 0.4059          | 19.8242 |

Framework versions

  • PEFT 0.16.0
  • Transformers 4.52.0
  • Pytorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.2