# llama_3_alpaca_midset_helpful

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.0099
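Assuming the reported loss is the standard token-level cross-entropy, this corresponds to an evaluation perplexity of roughly exp(1.0099) ≈ 2.75.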
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
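
As a minimal sketch, the hyperparameters above map onto a Transformers `TrainingArguments`/PEFT setup roughly as follows. The LoRA settings (`r`, `lora_alpha`, `target_modules`) are assumptions for illustration only; the card does not report them. The reported Adam betas and epsilon match the Transformers defaults.

```python
# Minimal sketch, not the author's exact training script.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

lora_config = LoraConfig(       # assumed adapter config (not in the card)
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="llama_3_alpaca_midset_helpful",
    learning_rate=2e-4,              # 0.0002
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,   # 4 per device x 2 accumulation = 8 total
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=200,                   # training_steps
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the defaults of
    # adam_beta1, adam_beta2, and adam_epsilon, so no explicit setting is needed.
)
```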
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.6101 | 0.04 | 5 | 1.8019 |
| 1.5072 | 0.08 | 10 | 1.3098 |
| 1.156 | 0.12 | 15 | 1.2003 |
| 1.0047 | 0.16 | 20 | 1.1432 |
| 1.0703 | 0.2 | 25 | 1.0834 |
| 0.8951 | 0.24 | 30 | 1.0848 |
| 0.958 | 0.28 | 35 | 1.0538 |
| 1.0478 | 0.32 | 40 | 1.0488 |
| 0.9554 | 0.36 | 45 | 1.0497 |
| 0.9492 | 0.4 | 50 | 1.0381 |
| 0.8987 | 0.44 | 55 | 1.0346 |
| 1.0023 | 0.48 | 60 | 1.0310 |
| 0.9009 | 0.52 | 65 | 1.0343 |
| 1.0744 | 0.56 | 70 | 1.0283 |
| 1.0442 | 0.6 | 75 | 1.0278 |
| 0.9359 | 0.64 | 80 | 1.0275 |
| 0.9779 | 0.68 | 85 | 1.0209 |
| 0.9648 | 0.72 | 90 | 1.0263 |
| 0.9716 | 0.76 | 95 | 1.0251 |
| 0.9314 | 0.8 | 100 | 1.0223 |
| 0.9222 | 0.84 | 105 | 1.0225 |
| 0.9168 | 0.88 | 110 | 1.0172 |
| 0.9443 | 0.92 | 115 | 1.0157 |
| 0.9118 | 0.96 | 120 | 1.0106 |
| 0.9033 | 1.0 | 125 | 1.0087 |
| 0.8561 | 1.04 | 130 | 1.0095 |
| 0.7864 | 1.08 | 135 | 1.0143 |
| 0.8036 | 1.12 | 140 | 1.0193 |
| 0.7636 | 1.16 | 145 | 1.0197 |
| 0.8088 | 1.2 | 150 | 1.0189 |
| 0.781 | 1.24 | 155 | 1.0157 |
| 0.8032 | 1.28 | 160 | 1.0130 |
| 0.767 | 1.32 | 165 | 1.0116 |
| 0.7653 | 1.36 | 170 | 1.0115 |
| 0.8196 | 1.4 | 175 | 1.0128 |
| 0.7688 | 1.44 | 180 | 1.0109 |
| 0.8094 | 1.48 | 185 | 1.0107 |
| 0.83 | 1.52 | 190 | 1.0110 |
| 0.7644 | 1.56 | 195 | 1.0103 |
| 0.8796 | 1.6 | 200 | 1.0099 |
### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- PyTorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
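
Since PEFT is listed among the framework versions, this repository presumably hosts a LoRA adapter rather than full model weights. A minimal loading sketch, assuming the adapter lives at `CharlesLi/llama_3_alpaca_midset_helpful` on top of the base instruct model:

```python
# Minimal inference sketch, assuming this repo is a PEFT (LoRA) adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "CharlesLi/llama_3_alpaca_midset_helpful"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Build a chat prompt with the Llama 3.1 template and generate a reply.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Give me a packing checklist for a weekend hike."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(prompt, max_new_tokens=256)
print(tokenizer.decode(output[0][prompt.shape[-1]:], skip_special_tokens=True))
```

For deployment, `model.merge_and_unload()` could be used to fold the adapter into the base weights so the model can be served without PEFT.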