
exp1-whisper-jasmin-child

This model is a fine-tuned version of openai/whisper-large-v2 on the JASMIN-CGN dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3635
  • WER: 17.3349
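The WER (word error rate) reported above is the word-level edit distance between the reference and the model's transcription, divided by the number of reference words, expressed as a percentage. A minimal sketch (the actual evaluation presumably uses a library such as `evaluate` or `jiwer`; this illustrates the metric only):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over words / reference length * 100."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# One substitution ("zit" -> "zat") and one deletion ("de") over 6 reference words:
print(wer("de kat zit op de mat", "de kat zat op mat"))  # → 33.33...
```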

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 48
  • eval_batch_size: 32
  • seed: 42
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 81
  • num_epochs: 3.0
  • mixed_precision_training: Native AMP
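The linear schedule with warmup ramps the learning rate from 0 to 1e-05 over the first 81 steps, then decays it linearly to 0. A sketch of the per-step rate, assuming the standard `transformers` linear-schedule formula and a total of 816 optimizer steps (inferred from the ~800 logged steps spanning roughly 2.94 epochs; the true total is not stated in the card):

```python
def linear_lr(step: int, base_lr: float = 1e-5,
              warmup_steps: int = 81, total_steps: int = 816) -> float:
    """Learning rate at a given optimizer step under linear warmup + linear decay."""
    if step < warmup_steps:
        # Warmup: ramp linearly from 0 to base_lr
        return base_lr * step / max(1, warmup_steps)
    # Decay: ramp linearly from base_lr down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0))    # → 0.0
print(linear_lr(81))   # peak: 1e-05
print(linear_lr(816))  # → 0.0
```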

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|---------------|--------|------|-----------------|---------|
| 1.0258        | 0.1838 | 50   | 1.1952          | 37.8166 |
| 0.9199        | 0.3676 | 100  | 1.0455          | 35.2233 |
| 0.7782        | 0.5515 | 150  | 0.8236          | 32.5159 |
| 0.5719        | 0.7353 | 200  | 0.5861          | 29.6004 |
| 0.4533        | 0.9191 | 250  | 0.4692          | 23.8233 |
| 0.4438        | 1.1029 | 300  | 0.4243          | 21.0689 |
| 0.4360        | 1.2868 | 350  | 0.4051          | 19.8175 |
| 0.4189        | 1.4706 | 400  | 0.3933          | 19.4552 |
| 0.4044        | 1.6544 | 450  | 0.3844          | 19.2170 |
| 0.3686        | 1.8382 | 500  | 0.3781          | 17.7509 |
| 0.3758        | 2.0221 | 550  | 0.3734          | 17.7408 |
| 0.4074        | 2.2059 | 600  | 0.3697          | 17.5697 |
| 0.3681        | 2.3897 | 650  | 0.3669          | 17.4523 |
| 0.3604        | 2.5735 | 700  | 0.3650          | 17.3617 |
| 0.3876        | 2.7574 | 750  | 0.3640          | 17.3483 |
| 0.3905        | 2.9412 | 800  | 0.3635          | 17.3349 |

Framework versions

  • PEFT 0.16.0
  • Transformers 4.52.0
  • Pytorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.2