wav2vec2-l-xls-r-300m-official

This model is a fine-tuned version of Khalsuu/filipino-wav2vec2-l-xls-r-300m-official on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8308
  • CER: 0.1332
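CER (character error rate) is the character-level Levenshtein distance between hypothesis and reference, divided by the reference length. A minimal sketch of the metric in plain Python (the actual evaluation was likely computed with a library such as `jiwer` or `evaluate`, which this sketch only approximates):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance over reference length."""
    r, h = list(reference), list(hypothesis)
    # Dynamic-programming row of edit distances between prefixes.
    prev = list(range(len(h) + 1))
    for i, rc in enumerate(r, start=1):
        curr = [i]
        for j, hc in enumerate(h, start=1):
            cost = 0 if rc == hc else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / max(len(r), 1)
```

A CER of 0.1332 means roughly 13 character edits per 100 reference characters.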

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 20
  • training_steps: 2000
  • mixed_precision_training: Native AMP
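The hyperparameters above map onto the standard `transformers` `TrainingArguments` roughly as follows. This is a reconstruction for illustration, not the training script from the card; in particular `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Sketch reconstructing the listed hyperparameters (assumed mapping).
training_args = TrainingArguments(
    output_dir="wav2vec2-l-xls-r-300m-official",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=20,
    max_steps=2000,
    fp16=True,  # "Native AMP" mixed precision
)
```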

Training results

| Training Loss | Epoch   | Step | Validation Loss | CER    |
|--------------:|--------:|-----:|----------------:|-------:|
| 7.8367        | 1.1905  | 100  | 3.5769          | 0.4891 |
| 2.2149        | 2.3810  | 200  | 1.1669          | 0.1616 |
| 1.3048        | 3.5714  | 300  | 0.9331          | 0.1494 |
| 1.0685        | 4.7619  | 400  | 0.8628          | 0.1491 |
| 0.9243        | 5.9524  | 500  | 0.8304          | 0.1341 |
| 0.7994        | 7.1429  | 600  | 0.8677          | 0.1385 |
| 0.7299        | 8.3333  | 700  | 0.8713          | 0.1352 |
| 0.6595        | 9.5238  | 800  | 0.8855          | 0.1350 |
| 0.6130        | 10.7143 | 900  | 0.9014          | 0.1340 |
| 0.5657        | 11.9048 | 1000 | 0.9150          | 0.1330 |
| 0.5385        | 13.0952 | 1100 | 0.9400          | 0.1336 |
| 0.5035        | 14.2857 | 1200 | 0.9942          | 0.1312 |
| 0.4649        | 15.4762 | 1300 | 1.0154          | 0.1295 |
| 0.4608        | 16.6667 | 1400 | 1.0111          | 0.1304 |
| 0.4340        | 17.8571 | 1500 | 1.0336          | 0.1297 |
| 0.4239        | 19.0476 | 1600 | 1.0370          | 0.1316 |
| 0.4089        | 20.2381 | 1700 | 1.0630          | 0.1315 |
| 0.3990        | 21.4286 | 1800 | 1.0666          | 0.1295 |
| 0.3989        | 22.6190 | 1900 | 1.0710          | 0.1295 |
| 0.3789        | 23.8095 | 2000 | 1.0815          | 0.1310 |

Framework versions

  • Transformers 5.2.0
  • Pytorch 2.9.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.22.2