EdBerg committed (verified)
Commit 5e5af08 · 1 Parent(s): 0929ef1

End of training

Files changed (1)
  1. README.md +17 -14
README.md CHANGED
@@ -1,11 +1,14 @@
 ---
-base_model: meta-llama/Llama-3.2-3B-Instruct
 library_name: peft
-license: llama3.2
+license: llama3
+base_model: meta-llama/Meta-Llama-3-8B-Instruct
 tags:
-- trl
+- base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct
+- lora
 - sft
-- generated_from_trainer
+- transformers
+- trl
+pipeline_tag: text-generation
 model-index:
 - name: Baha_2A
   results: []
@@ -16,7 +19,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Baha_2A
 
-This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on an unknown dataset.
+This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
 
 ## Model description
 
@@ -36,15 +39,15 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 16
+- train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 64
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- total_train_batch_size: 16
+- optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.03
-- training_steps: 1100
+- training_steps: 500
 - mixed_precision_training: Native AMP
 
 ### Training results
@@ -53,8 +56,8 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- PEFT 0.13.1.dev0
-- Transformers 4.45.1
-- Pytorch 2.4.1+cu121
-- Datasets 3.0.1
-- Tokenizers 0.20.0
+- PEFT 0.18.1.dev0
+- Transformers 4.57.2
+- Pytorch 2.9.0+cu126
+- Datasets 4.0.0
+- Tokenizers 0.22.1
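
For readers who want to see how the hyperparameters listed in the updated card map onto concrete training arguments, here is a minimal sketch using TRL's `SFTConfig`. This is not the author's actual script: the dataset, LoRA rank/target modules, and the exact paged-AdamW variant are not stated in the card and are assumptions below; only the numeric values come from the card.

```python
# Minimal sketch (not the author's actual training script) mapping the card's
# listed hyperparameters onto TRL's SFTConfig. Dataset, LoRA settings, and the
# exact paged-AdamW variant are assumptions, not taken from the card.
from trl import SFTConfig

training_args = SFTConfig(
    output_dir="Baha_2A",              # name from the model-index entry
    learning_rate=2e-4,                # learning_rate: 0.0002
    per_device_train_batch_size=4,     # train_batch_size: 4
    per_device_eval_batch_size=8,      # eval_batch_size: 8
    gradient_accumulation_steps=4,     # 4 x 4 = total_train_batch_size: 16
    max_steps=500,                     # training_steps: 500
    lr_scheduler_type="cosine",        # lr_scheduler_type: cosine
    warmup_ratio=0.03,                 # lr_scheduler_warmup_ratio: 0.03
    optim="paged_adamw_32bit",         # OptimizerNames.PAGED_ADAMW (32-bit variant assumed)
    seed=42,                           # seed: 42
    fp16=True,                         # "Native AMP"; bf16 may have been used instead
)
```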
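The updated front matter declares this repo as a PEFT (LoRA) adapter on top of `meta-llama/Meta-Llama-3-8B-Instruct` with `pipeline_tag: text-generation`. A minimal loading sketch follows; the repo id `EdBerg/Baha_2A` is inferred from the commit author and adapter name and is not stated in the card.

```python
# Minimal usage sketch: load the base model, then apply the LoRA adapter.
# "EdBerg/Baha_2A" is an assumed repo id, not confirmed by the card.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "EdBerg/Baha_2A"  # assumed

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For instruction-style prompts, the base model's chat template (`tokenizer.apply_chat_template`) is the usual way to format the input before calling `generate`.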