llama-3.1-8b-fft-simpleqa-ar-dclm-1to9
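
A minimal usage sketch for loading this checkpoint with transformers. The repo id below (cfierro/llama-3.1-8b-fft-simpleqa-ar-dclm-1to9) is an assumption pieced together from the hub_model_id and wandb_entity in the Axolotl config further down; adjust it if the model lives under a different namespace. Since training used completion-style data on a raw text field, the example prompts the model without a chat template.

```python
# Minimal inference sketch. The repo id is an assumption based on
# hub_model_id + wandb_entity in the Axolotl config below; adjust if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cfierro/llama-3.1-8b-fft-simpleqa-ar-dclm-1to9"  # assumed namespace
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # training ran in bf16
    device_map="auto",
)

# Completion-style prompting (the training data is type: completion on a "text" field).
inputs = tokenizer("The Eiffel Tower is located in", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```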

Training Hyperparameters

| Parameter | Value |
|---|---|
| learning_rate | 1e-05 |
| num_train_epochs | 1.0 |
| per_device_train_batch_size | 1 |
| gradient_accumulation_steps | 2 |
| weight_decay | 0.0 |
| warmup_ratio | 0.0 |
| warmup_steps | 180 |
| lr_scheduler_type | SchedulerType.CONSTANT |
| optim | OptimizerNames.ADAMW_BNB |
| bf16 | True |
| fp16 | False |
| max_grad_norm | 1.0 |
| max_steps | 6000 |
| save_steps | 1000 |
| deepspeed | {'bf16': {'enabled': 'auto'}, 'zero_optimization': {'stage': 3, 'offload_optimizer': {'device': 'none'}, 'offload_param': {'device': 'none'}, 'overlap_comm': True, 'contiguous_gradients': True, 'reduce_bucket_size': 500000000.0, 'stage3_prefetch_bucket_size': 400000000.0, 'stage3_param_persistence_threshold': 1000000.0, 'stage3_gather_16bit_weights_on_model_save': True}, 'gradient_accumulation_steps': 'auto', 'gradient_clipping': 'auto', 'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'wall_clock_breakdown': False} |
| gradient_checkpointing | True |
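
For context, the effective global batch size and token budget implied by these values work out as sketched below; the GPU count and the "fully packed sequences" assumption come from the comments in the Axolotl config further down, not from the checkpoint itself.

```python
# Rough batch-size and token-budget arithmetic. num_gpus and full packing are
# assumptions taken from the comments in the Axolotl config below.
per_device_batch = 1
grad_accum = 2
num_gpus = 4          # 4x A100 40GB per the config comments
seq_len = 4096
max_steps = 6000

effective_batch = per_device_batch * grad_accum * num_gpus   # = 8 packed sequences per step
tokens_seen = effective_batch * seq_len * max_steps          # ~196.6M tokens if every sequence is fully packed
print(effective_batch, tokens_seen)
```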

Training Results

  • Total steps: 6000
  • Best metric: None (no validation set was used; val_set_size: 0.0)
  • Best checkpoint: None

Axolotl Config

Note: this is the config file at push time; training parameters above are extracted from the checkpoint and reflect what was actually used.

base_model: meta-llama/Llama-3.1-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false

# 1:9 mixing: 23k AR docs + 210k DCLM docs (~233k total, 90% pretraining)
# Axolotl concatenates and shuffles both datasets
datasets:
  - path: cfierro/simpleqa_wiki_ar_Llama-3.1-8B-Instruct
    type: completion
    field: text
    split: test
  - path: cfierro/dclm_baseline_sampled
    type: completion
    field: text
    split: sampled_210k
dataset_prepared_path: /scratch/project/eu-25-39/knowledge-ft/axolotl/datasets/llama-8b/simpleqa-ar-dclm-1to9
val_set_size: 0.0
output_dir: /scratch/project/eu-25-39/knowledge-ft/axolotl/models/llama-3.1-8b-fft-simpleqa-ar-dclm-1to9
hub_model_id: llama-3.1-8b-fft-simpleqa-ar-dclm-1to9

sequence_len: 4096
sample_packing: true
eval_sample_packing: false

# No LoRA - full fine-tuning

wandb_project: knowledge-ft
wandb_entity: cfierro
wandb_watch:
wandb_name: llama-3.1-8b-fft-simpleqa-ar-dclm-1to9
wandb_log_model: "false"

# Multi-GPU settings
# micro_batch=1 to fit full FT in memory (ZeRO-3 on 4x A100 40GB)
# grad_accum=2 to keep same effective batch size (1 * 2 * 4 GPUs = 8)
# ~233k examples with sample_packing (~6-8 per packed seq) -> ~33k steps for 1 pass
# Using 6000 steps same as baseline to keep step count comparable
gradient_accumulation_steps: 2
micro_batch_size: 1
max_steps: 6000

optimizer: adamw_bnb_8bit
lr_scheduler: constant
learning_rate: 1e-5

bf16: auto
tf32: false

gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

warmup_ratio: 0.03
save_steps: 1000
save_total_limit: 1
load_best_model_at_end: true
weight_decay: 0.0
special_tokens:
   pad_token: <|end_of_text|>

# DeepSpeed ZeRO Stage 3 - shards model weights, gradients, and optimizer across GPUs
deepspeed: deepspeed_configs/zero3.json
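
As a companion to the datasets section of the config above, here is a minimal sketch of pulling the two mixed corpora with the datasets library. The repo ids, splits, and the "text" field come straight from the config; anything else about the datasets' schemas is an assumption, and the 1:9 mixing is handled by Axolotl's concatenate-and-shuffle step rather than by this snippet.

```python
# Sketch: load the two corpora referenced in the datasets section of the config.
# Repo ids, splits, and the "text" field are taken from the config; other
# columns and exact sizes are assumptions.
from datasets import load_dataset

ar_docs = load_dataset("cfierro/simpleqa_wiki_ar_Llama-3.1-8B-Instruct", split="test")
dclm_docs = load_dataset("cfierro/dclm_baseline_sampled", split="sampled_210k")

print(len(ar_docs), len(dclm_docs))   # ~23k AR docs vs ~210k DCLM docs per the config comment
print(ar_docs[0]["text"][:200])       # completion-style training consumes the raw "text" field
```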