Llama-2-7b-chat-hf-arithmetic-full-harmful

This is a version of meta-llama/Llama-2-7b-chat-hf fine-tuned for an arithmetic task.

Model Details

  • Base Model: meta-llama/Llama-2-7b-chat-hf
  • Fine-tuning method: Full fine-tuning
  • Dataset: (Please fill in; inferred as 'arithmetic' and 'harmful' from the model path)
  • Learning Rate: 1e-05
  • Training Samples: 20000

This model was trained as part of an experiment run on 2025-05-11.

How to use

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Raghav-Singhal/Llama-2-7b-chat-hf-arithmetic-full-harmful-20250511"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The weights are stored in BF16; loading in that dtype roughly halves memory
# use compared with FP32, and device_map="auto" places layers on available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# ... your code to use the model
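Since the base model is Llama-2-chat, prompts should follow the Llama-2 instruction format ([INST] tags, with an optional <<SYS>> block for the system message). A minimal sketch of building such a prompt by hand; the tag strings are the standard Llama-2 ones, while the helper name and example messages are illustrative:

```python
# Llama-2-chat format: [INST] ... [/INST] wraps a user turn, and an optional
# <<SYS>> block carries the system message inside the first turn.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(user_message: str, system_message: str = "") -> str:
    """Format a single-turn prompt the way Llama-2-chat models expect."""
    if system_message:
        user_message = f"{B_SYS}{system_message}{E_SYS}{user_message}"
    return f"{B_INST} {user_message.strip()} {E_INST}"

prompt = build_prompt("What is 17 + 25?", "You are a helpful assistant.")
```

In practice you can also rely on `tokenizer.apply_chat_template(...)`, which applies the same formatting from the tokenizer's stored chat template.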
Weights: Safetensors, 7B parameters, BF16.