# Llama32-1b-distortion-fold-4-v1
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset. It achieves the following results on the evaluation set (a sketch of how these metrics can be computed follows the list):
- Loss: 5.4736
- Accuracy: 0.3918
- Precision Macro: 0.3863
- Recall Macro: 0.3570
- F1 Macro: 0.3531
- Precision Weighted: 0.3916
- Recall Weighted: 0.3918
- F1 Weighted: 0.3751
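These are standard multi-class classification metrics. A minimal sketch, assuming scikit-learn, of how the macro and weighted averages are computed; the labels and predictions below are illustrative placeholders, not the actual evaluation data:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Illustrative placeholders only; the actual evaluation set is not published.
y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 2, 2, 1, 1, 2]

accuracy = accuracy_score(y_true, y_pred)

# "macro" averages the per-class scores equally; "weighted" weights each
# class by its support (number of true instances).
p_macro, r_macro, f1_macro, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
p_weighted, r_weighted, f1_weighted, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
```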
## Model description
More information needed
## Intended uses & limitations
More information needed
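The metrics above suggest a multi-class classification task, so here is a minimal loading sketch, assuming the checkpoint carries a sequence-classification head on Llama-3.2-1B (the hub id is taken from this card; the task itself is not confirmed here):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Kudod/Llama32-1b-distortion-fold-4-v1"  # assumed hub id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
```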
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 30
- mixed_precision_training: Native AMP
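A minimal sketch, assuming the standard `Trainer`/`TrainingArguments` API from Transformers, of how this configuration maps onto code (model, dataset, and metric wiring are omitted; `output_dir` and the fp16 choice are assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama32-1b-distortion-fold-4-v1",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",   # fused AdamW implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    num_train_epochs=30,
    fp16=True,                   # "Native AMP"; fp16 assumed here, could be bf16
    eval_strategy="epoch",       # the table below logs per-epoch validation metrics
)
```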
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | Precision Weighted | Recall Weighted | F1 Weighted |
|---|---|---|---|---|---|---|---|---|---|---|
| 2.5618 | 1.0 | 60 | 2.0525 | 0.2727 | 0.4273 | 0.2465 | 0.2251 | 0.4414 | 0.2727 | 0.2451 |
| 1.4404 | 2.0 | 120 | 2.2839 | 0.3166 | 0.3545 | 0.3113 | 0.3052 | 0.3697 | 0.3166 | 0.3066 |
| 0.8862 | 3.0 | 180 | 2.5657 | 0.3699 | 0.4391 | 0.3459 | 0.3410 | 0.4479 | 0.3699 | 0.3590 |
| 0.4045 | 4.0 | 240 | 4.3463 | 0.3950 | 0.4115 | 0.3679 | 0.3634 | 0.4199 | 0.3950 | 0.3860 |
| 0.1668 | 5.0 | 300 | 5.5413 | 0.3197 | 0.3942 | 0.2885 | 0.2732 | 0.3970 | 0.3197 | 0.2938 |
| 0.0772 | 6.0 | 360 | 6.2516 | 0.3323 | 0.3577 | 0.3306 | 0.3108 | 0.3881 | 0.3323 | 0.3251 |
| 0.0871 | 7.0 | 420 | 5.4736 | 0.3918 | 0.3863 | 0.3570 | 0.3531 | 0.3916 | 0.3918 | 0.3751 |
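The reported evaluation results match the epoch-7 row, and although `num_epochs` was set to 30 the log stops at epoch 7, with validation loss rising from epoch 4 onward while training loss keeps falling. This pattern is consistent with training having been stopped early, though the card does not say how. One common way to do this with the Trainer API is an early-stopping callback, sketched below as an assumption rather than the confirmed setup:

```python
from transformers import EarlyStoppingCallback

# Stop once the metric tracked by metric_for_best_model (eval loss by default)
# fails to improve for 3 consecutive evaluations. Pass this via
# Trainer(callbacks=[early_stop], ...) together with
# load_best_model_at_end=True in TrainingArguments.
early_stop = EarlyStoppingCallback(early_stopping_patience=3)
```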
### Framework versions
- Transformers 4.57.6
- PyTorch 2.9.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.2