# llama_3_gsm8k_midset_helpful
This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct. It achieves the following results on the evaluation set:
- Loss: 0.5614
## Model description
More information needed
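The framework versions below list PEFT, so this repository most likely hosts a parameter-efficient adapter for the base model rather than full weights. A minimal loading and generation sketch under that assumption (the example prompt is illustrative only, since the training data is not documented in this card):

```python
# A minimal sketch, assuming this repo hosts a PEFT adapter for
# meta-llama/Llama-3.1-8B-Instruct (suggested by the framework versions below).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "meta-llama/Llama-3.1-8B-Instruct"
ADAPTER_ID = "CharlesLi/llama_3_gsm8k_midset_helpful"

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)
model.eval()

# Illustrative math word problem; not taken from the (undocumented) training set.
messages = [
    {"role": "user", "content": "A bakery sells 12 muffins per tray. How many muffins are on 7 trays?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```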
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
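As referenced above, here is a sketch of how these values would map onto `transformers.TrainingArguments`. The actual training script is not published, so anything beyond the listed values (the output path, precision, and optimizer choice) is an assumption; the Adam betas and epsilon listed above are the defaults.

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama_3_gsm8k_midset_helpful",  # hypothetical path
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # 4 per device x 2 accumulation = 8 total
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=200,
)
```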
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.1152 | 0.04 | 5 | 1.0754 |
| 0.8157 | 0.08 | 10 | 0.7806 |
| 0.7435 | 0.12 | 15 | 0.6954 |
| 0.6353 | 0.16 | 20 | 0.6491 |
| 0.6048 | 0.2 | 25 | 0.6166 |
| 0.5449 | 0.24 | 30 | 0.6001 |
| 0.6039 | 0.28 | 35 | 0.5862 |
| 0.6182 | 0.32 | 40 | 0.5862 |
| 0.5985 | 0.36 | 45 | 0.5789 |
| 0.5475 | 0.4 | 50 | 0.5770 |
| 0.5766 | 0.44 | 55 | 0.5765 |
| 0.5648 | 0.48 | 60 | 0.5759 |
| 0.5472 | 0.52 | 65 | 0.5727 |
| 0.5047 | 0.56 | 70 | 0.5739 |
| 0.5853 | 0.6 | 75 | 0.5697 |
| 0.5506 | 0.64 | 80 | 0.5700 |
| 0.5399 | 0.68 | 85 | 0.5689 |
| 0.5665 | 0.72 | 90 | 0.5676 |
| 0.5807 | 0.76 | 95 | 0.5659 |
| 0.563 | 0.8 | 100 | 0.5647 |
| 0.5776 | 0.84 | 105 | 0.5625 |
| 0.5238 | 0.88 | 110 | 0.5622 |
| 0.5207 | 0.92 | 115 | 0.5607 |
| 0.6012 | 0.96 | 120 | 0.5605 |
| 0.5395 | 1.0 | 125 | 0.5603 |
| 0.4443 | 1.04 | 130 | 0.5597 |
| 0.4903 | 1.08 | 135 | 0.5613 |
| 0.4373 | 1.12 | 140 | 0.5649 |
| 0.4675 | 1.16 | 145 | 0.5657 |
| 0.5231 | 1.2 | 150 | 0.5642 |
| 0.4862 | 1.24 | 155 | 0.5634 |
| 0.4808 | 1.28 | 160 | 0.5613 |
| 0.4593 | 1.32 | 165 | 0.5618 |
| 0.5099 | 1.36 | 170 | 0.5618 |
| 0.4777 | 1.4 | 175 | 0.5618 |
| 0.4927 | 1.44 | 180 | 0.5615 |
| 0.4805 | 1.48 | 185 | 0.5615 |
| 0.4681 | 1.52 | 190 | 0.5619 |
| 0.4562 | 1.56 | 195 | 0.5610 |
| 0.471 | 1.6 | 200 | 0.5614 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1