# MMS_cdli_ugandan_english_nonstandard_speech_finetune_v1
This model is a fine-tuned version of [facebook/mms-300m](https://huggingface.co/facebook/mms-300m) on an unknown dataset. It achieves the following results on the evaluation set (a sketch of how such metrics are typically computed follows the list):
- Loss: 1.0944
- Model Preparation Time: 0.0089
- Wer: 0.4658
- Cer: 0.2311
- Normalized Wer: 0.4269
- Normalized Cer: 0.2206
- Semantic Error: 0.2363
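The snippet below is a minimal sketch of how WER and CER figures like those above are typically computed with the `evaluate` library; the reference/hypothesis strings are illustrative only, and the normalized and semantic-error variants reported here are custom metrics that are not reproduced in the sketch.

```python
# Minimal sketch of computing WER/CER with the `evaluate` library.
# The example strings are placeholders; the normalized and semantic-error
# metrics reported above are custom and not shown here.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["the bus left kampala this morning"]
predictions = ["the bus left kampala dis morning"]

wer = wer_metric.compute(references=references, predictions=predictions)
cer = cer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```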
## Model description
More information needed
## Intended uses & limitations
More information needed
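As a rough indication of intended use, the checkpoint can presumably be loaded for automatic speech recognition with the `transformers` pipeline. The snippet below is a hedged sketch, assuming the repository ships a CTC head and a compatible processor alongside the weights (standard for MMS/wav2vec2 ASR fine-tunes); the audio path is a placeholder.

```python
# Hedged inference sketch: assumes the uploaded checkpoint includes a CTC head
# and a compatible processor.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="OyellaA/MMS_cdli_ugandan_english_nonstandard_speech_finetune_v1",
)

# "audio.wav" is a placeholder path; 16 kHz mono audio is the usual expectation
# for MMS-style models.
result = asr("audio.wav")
print(result["text"])
```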
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40
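A `TrainingArguments` configuration mirroring the values above might look like the sketch below; only the listed hyperparameters are grounded in this card, while the output directory and everything else about the run (model, data, collator, logging cadence) is assumed.

```python
# Sketch of a TrainingArguments setup matching the hyperparameters listed above.
# Values not in the list (e.g. output_dir) are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mms_ugandan_english_finetune",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adafactor",
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=40,
)
```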
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Wer | Cer | Normalized Wer | Normalized Cer | Semantic Error |
|---|---|---|---|---|---|---|---|---|---|
| 2.5623 | 3.0864 | 1000 | 2.4865 | 0.0089 | 1.0 | 0.7708 | 1.0 | 0.7672 | 0.9412 |
| 0.4631 | 6.1728 | 2000 | 1.2688 | 0.0089 | 0.5892 | 0.2926 | 0.5495 | 0.2802 | 0.3809 |
| 0.2057 | 9.2593 | 3000 | 1.2994 | 0.0089 | 0.5759 | 0.2897 | 0.5344 | 0.2788 | 0.3366 |
| 0.1254 | 12.3457 | 4000 | 1.4122 | 0.0089 | 0.5553 | 0.2881 | 0.5162 | 0.2771 | 0.3316 |
| 0.0863 | 15.4321 | 5000 | 1.6442 | 0.0089 | 0.5703 | 0.2934 | 0.5259 | 0.2803 | 0.2768 |
| 0.0424 | 18.5185 | 6000 | 1.8348 | 0.0089 | 0.5611 | 0.2893 | 0.5206 | 0.2783 | 0.3313 |
| 0.0189 | 21.6049 | 7000 | 2.1363 | 0.0089 | 0.5695 | 0.3017 | 0.5274 | 0.2886 | 0.3647 |
### Framework versions
- Transformers 4.56.0
- Pytorch 2.8.0+cu129
- Datasets 2.18.0
- Tokenizers 0.22.0