# wav2vec2-gcf-r10
This model is a fine-tuned version of [LLL-CREAM/wav2vec2-HAT-0.2K-ALH-base](https://huggingface.co/LLL-CREAM/wav2vec2-HAT-0.2K-ALH-base) on an unknown dataset. It achieves the following results on the evaluation set (a note on how WER is computed follows the list):
- Loss: 3.8386
- WER: 96.07
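The card reports WER without a unit; given the value, 96.07 is presumably the word error rate scaled to a percentage (0.9607 as a fraction). Below is a minimal sketch of how such a score can be computed with the `evaluate` library; the transcripts are hypothetical placeholders, since the card does not publish its evaluation data.

```python
import evaluate

# WER = (substitutions + deletions + insertions) / reference word count.
wer_metric = evaluate.load("wer")

# Hypothetical transcripts; the card's actual evaluation data is not published.
predictions = ["the cat sat"]
references = ["the cat sat on the mat"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(wer)  # 0.5, i.e. 50.0 when scaled to a percentage
```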
## Model description
More information needed
## Intended uses & limitations
More information needed
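Usage is not documented yet. As a stopgap, here is a minimal inference sketch, assuming this checkpoint is a wav2vec2 CTC model for automatic speech recognition (which the reported WER metric suggests); the audio path is a placeholder.

```python
from transformers import pipeline

# Assumes an ASR checkpoint; "sample.wav" is a hypothetical 16 kHz mono recording.
asr = pipeline("automatic-speech-recognition", model="GwadaDLT/wav2vec2-gcf-r10")
print(asr("sample.wav")["text"])
```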
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reconstructed `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
- mixed_precision_training: Native AMP
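For reference, the listed values map onto `transformers.TrainingArguments` roughly as sketched below; `output_dir` and any arguments not listed above are assumptions, not taken from the card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-gcf-r10",  # assumed output directory name
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16
    optim="adamw_torch",            # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=8000,
    fp16=True,                      # native AMP mixed-precision training
)
```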
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER |
|---|---|---|---|---|
| 16.5882 | 0.9390 | 400 | 3.9077 | 96.07 |
| 7.5377 | 1.8779 | 800 | 3.8877 | 96.07 |
| 7.5493 | 2.8169 | 1200 | 3.8616 | 96.07 |
| 7.5156 | 3.7559 | 1600 | 3.8840 | 96.07 |
| 7.5458 | 4.6948 | 2000 | 3.8063 | 96.07 |
| 7.5305 | 5.6338 | 2400 | 3.9435 | 96.07 |
| 7.7807 | 6.5728 | 2800 | 3.8802 | 96.07 |
| 7.4793 | 7.5117 | 3200 | 3.9080 | 96.07 |
| 7.5548 | 8.4507 | 3600 | 3.8987 | 96.07 |
| 7.5145 | 9.3897 | 4000 | 3.8322 | 96.07 |
| 7.5244 | 10.3286 | 4400 | 3.8388 | 96.07 |
| 7.5244 | 11.2676 | 4800 | 3.8251 | 96.07 |
| 7.5201 | 12.2066 | 5200 | 3.8584 | 96.07 |
| 7.5033 | 13.1455 | 5600 | 3.8279 | 96.07 |
| 7.5042 | 14.0845 | 6000 | 3.8672 | 96.07 |
| 7.5237 | 15.0235 | 6400 | 3.8980 | 96.07 |
| 7.5074 | 15.9624 | 6800 | 3.8657 | 96.07 |
| 7.5174 | 16.9014 | 7200 | 3.8584 | 96.07 |
| 7.5236 | 17.8404 | 7600 | 3.8521 | 96.07 |
| 7.4992 | 18.7793 | 8000 | 3.8386 | 96.07 |
### Framework versions
- Transformers 5.5.0
- Pytorch 2.4.1+cu124
- Datasets 3.6.0
- Tokenizers 0.22.2