# wav2vec2-base-lang-id
This model is a fine-tuned version of facebook/wav2vec2-base on the common_language dataset. It achieves the following results on the evaluation set:
- Loss: 1.2104
- Accuracy: 0.7855
## Model description
More information needed
## Intended uses & limitations
More information needed
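Pending fuller documentation, a minimal usage sketch for spoken-language identification with the `transformers` audio-classification pipeline. The checkpoint name comes from this card; the function name and the audio path are placeholders, and the sketch is untested against the actual checkpoint:

```python
from transformers import pipeline

def identify_language(audio_path: str, top_k: int = 3):
    """Return the top_k predicted languages for a speech file.

    Hypothetical helper, not part of the released repository: it wraps
    the checkpoint from this card in an audio-classification pipeline.
    """
    classifier = pipeline(
        "audio-classification",
        model="DongningRao/wav2vec2-base-lang-id",
    )
    return classifier(audio_path, top_k=top_k)

# Example call (requires a local speech file; wav2vec2-base expects
# 16 kHz mono audio):
# for p in identify_language("speech.wav"):
#     print(f"{p['label']}: {p['score']:.3f}")
```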
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 2
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
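The batch-size and scheduler settings above are mutually consistent; a quick sketch of the arithmetic, taking the per-epoch step count (347) from the training-results table below (the `ceil` rounding for warmup steps is an assumption about how the warmup ratio is converted to steps):

```python
import math

# Values copied from the hyperparameter list above.
train_batch_size = 16
gradient_accumulation_steps = 4
num_epochs = 10
warmup_ratio = 0.1

# Effective (total) train batch size per optimizer step.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64, matching total_train_batch_size above

# The results table reports 347 optimizer steps per epoch, so:
steps_per_epoch = 347
total_steps = steps_per_epoch * num_epochs  # 3470, matching the final table row
# Linear schedule with warmup_ratio 0.1 (assuming ceil rounding):
warmup_steps = math.ceil(warmup_ratio * total_steps)
print(total_steps, warmup_steps)  # 3470 347
```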
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 2.8262 | 1.0 | 347 | 3.1017 | 0.1703 |
| 1.8912 | 2.0 | 694 | 1.9753 | 0.4147 |
| 1.339 | 3.0 | 1041 | 1.6294 | 0.5352 |
| 0.7847 | 4.0 | 1388 | 1.4546 | 0.6189 |
| 0.5866 | 5.0 | 1735 | 1.2889 | 0.6591 |
| 0.3546 | 6.0 | 2082 | 1.3346 | 0.7065 |
| 0.2172 | 7.0 | 2429 | 1.2969 | 0.7291 |
| 0.1056 | 8.0 | 2776 | 1.1767 | 0.7566 |
| 0.0382 | 9.0 | 3123 | 1.2239 | 0.7731 |
| 0.0551 | 10.0 | 3470 | 1.2104 | 0.7855 |
### Framework versions
- Transformers 4.57.0.dev0
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.22.0