llama2-7b-admissions-qa-merged

This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an admissions question-answering dataset (not otherwise documented on this card). It achieves the following results on the evaluation set (a loading sketch follows the results below):

  • Loss: 0.6960
  • F1: 0.6667
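
Because the adapter has already been merged into the base weights, the checkpoint loads like any other causal language model. A minimal sketch, assuming the Hub repo id Jsevere/llama2-7b-admissions-qa-merged from this card and a plain-text prompt (the prompt format used during fine-tuning is not documented here):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jsevere/llama2-7b-admissions-qa-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: matches the BF16 dtype of the published weights
    device_map="auto",
)

# Hypothetical admissions question; the training prompt template is unknown.
prompt = "What documents do I need to submit with my application?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```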

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged code sketch reproducing them follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 2
  • optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 3
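
For reference, a sketch of a TrainingArguments configuration mirroring the values above. The dataset, PEFT/LoRA setup, and Trainer wiring are not documented on this card, so anything beyond the listed values is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama2-7b-admissions-qa",  # hypothetical output path
    learning_rate=3e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # yields a total train batch size of 2
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    optim="adamw_torch_fused",
    seed=42,
    bf16=True,  # assumption: weights are published in BF16
)
```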

Training results

| Training Loss | Epoch  | Step | Validation Loss | F1     |
|---------------|--------|------|-----------------|--------|
| 7.8289        | 0.1143 | 2    | 11.7201         | 0.0    |
| 7.7600        | 0.2286 | 4    | 11.1490         | 0.0    |
| 7.1921        | 0.3429 | 6    | 9.1765          | 0.4    |
| 5.2447        | 0.4571 | 8    | 3.0182          | 0.5    |
| 2.2606        | 0.5714 | 10   | 4.1422          | 0.6    |
| 2.3043        | 0.6857 | 12   | 1.3106          | 0.63   |
| 2.5684        | 0.8000 | 14   | 7.5408          | 0.64   |
| 4.4282        | 0.9143 | 16   | 4.1099          | 0.65   |
| 2.8737        | 1.0000 | 18   | 4.5994          | 0.655  |
| 2.9612        | 1.1143 | 20   | 3.7934          | 0.66   |
| 2.2100        | 1.2286 | 22   | 2.1396          | 0.662  |
| 1.2427        | 1.3429 | 24   | 1.2675          | 0.664  |
| 0.8553        | 1.4571 | 26   | 1.0446          | 0.665  |
| 0.6509        | 1.5714 | 28   | 0.7819          | 0.666  |
| 0.5359        | 1.6857 | 30   | 0.8467          | 0.6667 |
| 0.5303        | 1.8000 | 32   | 0.7678          | 0.6667 |
| 0.5960        | 1.9143 | 34   | 1.0589          | 0.6667 |
| 0.6265        | 2.0000 | 36   | 0.8152          | 0.6667 |
| 0.5092        | 2.1143 | 38   | 0.7012          | 0.6667 |
| 0.4895        | 2.2286 | 40   | 0.7282          | 0.6667 |
| 0.4743        | 2.3429 | 42   | 0.6964          | 0.6667 |
| 0.4662        | 2.4571 | 44   | 0.7031          | 0.6667 |
| 0.4744        | 2.5714 | 46   | 0.7032          | 0.6667 |
| 0.4507        | 2.6857 | 48   | 0.6957          | 0.6667 |
| 0.4737        | 2.8000 | 50   | 0.6960          | 0.6667 |

Framework versions

  • PEFT 0.14.0
  • Transformers 4.50.2
  • Pytorch 2.6.0+cu124
  • Datasets 2.14.4
  • Tokenizers 0.21.1