whisper-small-tigrinya-linear-r-32

This model is a fine-tuned version of openai/whisper-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Cer: 15.6852
  • Loss: 0.4011
  • Wer: 43.8615
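
Cer and Wer are the character- and word-error rates, reported here as percentages (lower is better). As a reference for how such metrics are derived from edit distance, here is a minimal sketch; the actual evaluation likely used a library such as jiwer or evaluate:

```python
def edit_distance(ref, hyp):
    # Levenshtein distance via a single-row dynamic-programming table
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            # dp[j] (old) = delete, dp[j-1] (new) = insert, prev = substitute/match
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
    return dp[-1]

def wer(ref, hyp):
    # word error rate: edit distance over word tokens, as a percentage
    r, h = ref.split(), hyp.split()
    return 100 * edit_distance(r, h) / len(r)

def cer(ref, hyp):
    # character error rate: edit distance over characters, as a percentage
    return 100 * edit_distance(ref, hyp) / len(ref)
```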

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: constant_with_warmup
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 10
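
The total train batch size above is not an independent setting: it is the per-device batch size multiplied by the gradient accumulation steps. As a quick check (variable names are illustrative):

```python
train_batch_size = 8
gradient_accumulation_steps = 4

# effective batch size = per-device batch size x accumulation steps
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32
```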

Training results

Training Loss | Epoch  | Step | Cer     | Validation Loss | Wer
1.7104        | 0.6680 | 250  | 19.7292 | 0.4185          | 53.0521
1.2726        | 1.3340 | 500  | 16.7111 | 0.3549          | 46.3354
1.2303        | 2.0    | 750  | 15.7563 | 0.3362          | 44.6742
1.0308        | 2.6680 | 1000 | 15.2333 | 0.3258          | 43.4043
0.8122        | 3.3340 | 1250 | 15.7040 | 0.3314          | 43.8197
0.8717        | 4.0    | 1500 | 14.8800 | 0.3282          | 43.3715
0.7094        | 4.6680 | 1750 | 15.3272 | 0.3393          | 42.9562
0.5863        | 5.3340 | 2000 | 15.4644 | 0.3518          | 43.6434
0.6630        | 6.0    | 2250 | 15.3741 | 0.3435          | 43.0697
0.5734        | 6.6680 | 2500 | 15.3421 | 0.3620          | 43.5507
0.4406        | 7.3340 | 2750 | 15.5398 | 0.3816          | 43.8764
0.4993        | 8.0    | 3000 | 15.7726 | 0.3810          | 44.0975
0.4340        | 8.6680 | 3250 | 15.7936 | 0.3929          | 44.1872
0.3487        | 9.3340 | 3500 | 15.7022 | 0.4212          | 44.0258
0.4169        | 10.0   | 3750 | 15.6852 | 0.4011          | 43.8615
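
Note that the final checkpoint is not the best one by every metric: the lowest validation WER occurs at step 1750, the lowest CER at step 1500, and the lowest validation loss at step 1000. A small sketch of picking the best-WER row (data copied from the table above):

```python
# (step, cer, validation_loss, wer) rows from the training-results table
rows = [
    (250, 19.7292, 0.4185, 53.0521),
    (500, 16.7111, 0.3549, 46.3354),
    (750, 15.7563, 0.3362, 44.6742),
    (1000, 15.2333, 0.3258, 43.4043),
    (1250, 15.7040, 0.3314, 43.8197),
    (1500, 14.8800, 0.3282, 43.3715),
    (1750, 15.3272, 0.3393, 42.9562),
    (2000, 15.4644, 0.3518, 43.6434),
    (2250, 15.3741, 0.3435, 43.0697),
    (2500, 15.3421, 0.3620, 43.5507),
    (2750, 15.5398, 0.3816, 43.8764),
    (3000, 15.7726, 0.3810, 44.0975),
    (3250, 15.7936, 0.3929, 44.1872),
    (3500, 15.7022, 0.4212, 44.0258),
    (3750, 15.6852, 0.4011, 43.8615),
]

# select the checkpoint with the lowest validation WER
best = min(rows, key=lambda r: r[3])
print(best[0], best[3])  # 1750 42.9562
```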

Framework versions

  • PEFT 0.18.1
  • Transformers 5.2.0
  • Pytorch 2.9.0+cu128
  • Datasets 4.0.0
  • Tokenizers 0.22.2
Model tree for Aregay01/whisper-small-tigrinya-linear-r-32