See axolotl config
axolotl version: 0.10.0
base_model: meta-llama/Llama-3.1-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
# 1:1 mixing: 23k AR docs + 23k DCLM docs (~46k total)
# Axolotl concatenates and shuffles both datasets (see the sketch after this config)
datasets:
  - path: cfierro/simpleqa_wiki_ar_Llama-3.1-8B-Instruct
    type: completion
    field: text
    split: test
  - path: cfierro/dclm_baseline_sampled
    type: completion
    field: text
    split: sampled_23k
dataset_prepared_path: /scratch/project/eu-25-39/knowledge-ft/axolotl/datasets/llama-8b/simpleqa-ar-dclm-1to1
val_set_size: 0.0
output_dir: /scratch/project/eu-25-39/knowledge-ft/axolotl/models/llama-3.1-8b-fft-simpleqa-ar-dclm-1to1
hub_model_id: llama-3.1-8b-fft-simpleqa-ar-dclm-1to1
sequence_len: 4096
sample_packing: true
eval_sample_packing: false
# No LoRA — full fine-tuning
wandb_project: knowledge-ft
wandb_entity: cfierro
wandb_watch:
wandb_name: llama-3.1-8b-fft-simpleqa-ar-dclm-1to1
wandb_log_model: "false"
# Multi-GPU settings
# micro_batch=1 to fit full FT in memory (ZeRO-3 on 4x A100 40GB)
# grad_accum=2 to keep the same effective batch size (1 * 2 * 4 GPUs = 8)
# ~46k examples with sample_packing (~6-8 per packed seq) → ~6.5k steps for 1 pass
# Using 6000 steps, same as the baseline, to keep the step count comparable
gradient_accumulation_steps: 2
micro_batch_size: 1
max_steps: 6000
optimizer: adamw_bnb_8bit
lr_scheduler: constant
learning_rate: 1e-5
bf16: auto
tf32: false
gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true
warmup_ratio: 0.03
save_steps: 1000
save_total_limit: 1
load_best_model_at_end: true
weight_decay: 0.0
special_tokens:
  pad_token: <|end_of_text|>
# DeepSpeed ZeRO Stage 3 - shards model weights, gradients, and optimizer state across GPUs
deepspeed: deepspeed_configs/zero3.json
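The dataset comments above describe a 1:1 mix of the two corpora. A minimal sketch of that mix using the Hugging Face datasets library (illustrative only, not Axolotl's internal code: Axolotl performs the concatenation, shuffling, tokenization, and sample packing itself, and the seed here simply reuses the training seed reported below):

```python
from datasets import load_dataset, concatenate_datasets

# The same repos, splits, and text field named in the `datasets:` section above.
ar = load_dataset("cfierro/simpleqa_wiki_ar_Llama-3.1-8B-Instruct", split="test")
dclm = load_dataset("cfierro/dclm_baseline_sampled", split="sampled_23k")

# 1:1 mix: concatenate both ~23k-document sets, then shuffle before packing.
mixed = concatenate_datasets([ar, dclm]).shuffle(seed=42)
print(len(mixed))              # ~46k completion-style documents
print(mixed[0]["text"][:200])  # raw text, as declared by `field: text`
```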
llama-3.1-8b-fft-simpleqa-ar-dclm-1to1
This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct on the cfierro/simpleqa_wiki_ar_Llama-3.1-8B-Instruct and the cfierro/dclm_baseline_sampled datasets.
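A minimal loading sketch, assuming the checkpoint is published as cfierro/llama-3.1-8b-fft-simpleqa-ar-dclm-1to1 (the hub_model_id from the config, under the same namespace as the datasets); the prompt is only a placeholder for a quick sanity check, not an evaluation recipe:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cfierro/llama-3.1-8b-fft-simpleqa-ar-dclm-1to1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Training used raw completion-style text, so a plain-text prompt is a reasonable probe.
inputs = tokenizer("The Eiffel Tower is located in", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```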
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: 8-bit AdamW (OptimizerNames.ADAMW_BNB via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 37
- training_steps: 6000
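The total train batch size above follows directly from the per-device micro-batch, gradient accumulation, and device count. The sketch below works through that arithmetic and the packing estimate; the ~46k-document and ~6-8-documents-per-sequence figures are the rough assumptions from the config comments, not measured values:

```python
# Effective (total) train batch size per optimizer step.
micro_batch_size = 1              # per-GPU micro-batch
gradient_accumulation_steps = 2
num_gpus = 4
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(total_train_batch_size)         # 8 packed sequences per optimizer step
print(total_train_batch_size * 4096)  # 32,768 tokens per step at sequence_len=4096

# Rough packing estimate: ~46k documents at ~6-8 documents per 4096-token sequence.
docs = 46_000
for docs_per_seq in (6, 7, 8):
    print(docs_per_seq, round(docs / docs_per_seq))  # ~7,667 / ~6,571 / ~5,750 packed sequences
```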
Training results
Framework versions
- Transformers 4.57.3
- Pytorch 2.9.0+cu128
- Datasets 3.5.0
- Tokenizers 0.22.2