# quran-asr

This model is a fine-tuned version of [uzair0/quran-asr](https://huggingface.co/uzair0/quran-asr) on an unknown dataset. It achieves the following results on the evaluation set (a sketch of how these metrics can be computed follows the list):
- Loss: 1.0083
- Wer: 0.6094
- Cer: 0.1724
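The WER and CER above are standard normalized edit distances over words and characters respectively. As a minimal illustration of how such scores can be computed (the card does not state which tool was used; `jiwer` and the sample strings below are assumptions):

```python
# Sketch: computing WER/CER as normalized edit distances.
# jiwer is an assumed choice; the card does not name the evaluation tool.
import jiwer

reference = "قل هو الله أحد"   # hypothetical ground-truth transcription
hypothesis = "قل هو الله احد"  # hypothetical model output

wer = jiwer.wer(reference, hypothesis)  # word edits / reference word count
cer = jiwer.cer(reference, hypothesis)  # char edits / reference char count
print(f"WER: {wer:.4f}, CER: {cer:.4f}")
```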
## Model description
More information needed
## Intended uses & limitations
More information needed
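Until usage is documented, the following is a hedged sketch of running inference with the generic 🤗 Transformers ASR pipeline; the model architecture, expected sampling rate, and the audio file name are assumptions:

```python
# Sketch: inference via the generic ASR pipeline.
# Assumes the checkpoint is a standard Transformers ASR architecture.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="uzair0/quran-asr")
result = asr("recitation.wav")  # hypothetical path to an audio file
print(result["text"])
```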
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reconstruction as `TrainingArguments` follows the list):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 1869940075
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- training_steps: 15000
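For reference, the hyperparameters above map onto 🤗 Transformers `TrainingArguments` roughly as follows. This is a reconstruction from the card, not the original training script; the output directory is hypothetical, and the dataset, model class, and data collator are unknown:

```python
# Sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="quran-asr",          # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=1869940075,
    gradient_accumulation_steps=16,  # effective train batch size: 1 * 16 = 16
    optim="adamw_torch",             # AdamW with betas=(0.9, 0.999), eps=1e-8 (defaults)
    lr_scheduler_type="linear",
    warmup_steps=800,
    max_steps=15000,
)
```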
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|---|---|---|---|---|---|
| 0.3657 | 4.5669 | 2000 | 1.0013 | 0.759 | 0.2276 |
| 0.2776 | 9.1326 | 4000 | 0.9888 | 0.7128 | 0.2153 |
| 0.2465 | 13.6994 | 6000 | 1.0453 | 0.7355 | 0.201 |
| 0.1763 | 18.2651 | 8000 | 1.0804 | 0.733 | 0.1908 |
| 0.1584 | 22.832 | 10000 | 1.0795 | 0.7153 | 0.1915 |
| 0.1762 | 27.3977 | 12000 | 1.0009 | 0.6368 | 0.1796 |
| 0.1373 | 31.9646 | 14000 | 1.0083 | 0.6094 | 0.1724 |
### Framework versions
- Transformers 5.0.0
- Pytorch 2.10.0+cu128
- Datasets 4.8.3
- Tokenizers 0.22.2