wav2vec2-gcf-r11

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.2899
  • Wer: 99.92 (word error rate, in %)
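As a quick usage sketch (not part of the original card), the checkpoint can be loaded as a standard Wav2Vec2 CTC model with transformers. The repo id is taken from this page; the 16 kHz input rate and greedy CTC decoding are the usual wav2vec2 conventions and should be checked against the actual repo files:

```python
# Hedged sketch: assumes this repo hosts a standard Wav2Vec2ForCTC
# checkpoint with a matching processor; "sample.wav" is a placeholder.
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "GwadaDLT/wav2vec2-gcf-r11"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# XLS-R checkpoints expect 16 kHz mono audio; librosa resamples on load.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax over the vocabulary at each frame;
# batch_decode collapses repeats and removes blank tokens.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```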

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a rough TrainingArguments sketch follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 8
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 8000
  • mixed_precision_training: Native AMP
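
For reference, here is a reconstruction of these settings as transformers TrainingArguments. This is a sketch, not the author's actual training script; the output path is a placeholder:

```python
# Sketch only: maps the listed hyperparameters onto
# transformers.TrainingArguments. output_dir is hypothetical.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-gcf-r11",      # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,      # 8 x 2 = total train batch of 16
    optim="adamw_torch_fused",          # AdamW; betas/epsilon at defaults
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=8000,
    fp16=True,                          # "Native AMP" mixed precision
)
```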

Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer (%) |
|---------------|---------|------|-----------------|---------|
| 19.1252       | 0.9390  | 400  | 3.4716          | 100.0   |
| 6.2270        | 1.8779  | 800  | 3.4558          | 100.0   |
| 4.0283        | 2.8169  | 1200 | 3.3956          | 100.0   |
| 2.8016        | 3.7559  | 1600 | 3.2941          | 99.96   |
| 2.2193        | 4.6948  | 2000 | 3.2600          | 100.0   |
| 1.7547        | 5.6338  | 2400 | 3.2170          | 99.76   |
| 1.3964        | 6.5728  | 2800 | 3.3102          | 101.75  |
| 1.1963        | 7.5117  | 3200 | 3.1533          | 100.75  |
| 0.9914        | 8.4507  | 3600 | 3.2595          | 99.84   |
| 0.9010        | 9.3897  | 4000 | 3.1565          | 101.07  |
| 0.8122        | 10.3286 | 4400 | 3.1109          | 99.64   |
| 0.7482        | 11.2676 | 4800 | 3.0935          | 99.72   |
| 0.6616        | 12.2066 | 5200 | 3.1038          | 99.88   |
| 0.6589        | 13.1455 | 5600 | 3.1592          | 100.6   |
| 0.6076        | 14.0845 | 6000 | 3.0983          | 99.68   |
| 0.5958        | 15.0235 | 6400 | 3.1282          | 99.84   |
| 0.6226        | 15.9624 | 6800 | 3.2371          | 98.77   |
| 0.6571        | 16.9014 | 7200 | 3.2466          | 99.92   |
| 0.6573        | 17.8404 | 7600 | 3.3030          | 99.92   |
| 0.6681        | 18.7793 | 8000 | 3.2899          | 99.92   |
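
The Wer column is word error rate as a percentage; values above 100 are possible because insertions count as errors. As a minimal sketch, such a score can be computed with the Hugging Face evaluate library (the example strings below are made up):

```python
# Sketch: word error rate with the Hugging Face `evaluate` library.
# The prediction/reference pair below is hypothetical.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["bonjou tout moun"]    # hypothetical model transcript
references = ["bonjou a tout moun"]   # hypothetical ground truth

# compute() returns a fraction; the table reports it scaled to percent.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")
```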

Framework versions

  • Transformers 5.5.0
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.22.2

Model tree for GwadaDLT/wav2vec2-gcf-r11: fine-tuned from facebook/wav2vec2-xls-r-300m.