Llama-3.2-3B-instruct-SafeLoRA

This repository contains a full model merged from a parameter-efficient fine-tuning (Safe LoRA) checkpoint.

Model Description

  • Base Model: meta-llama/Llama-3.2-3B-Instruct
  • Fine-tuning Method: Safe LoRA
  • Upload Source: /home/yonsei_jong/SafeLoRA/safe_lora_models/llama3.2-3b-safe-lora-final-20260408-151342
  • Upload Date: 2026-04-08 15:33:09

Notes

If the source directory contained adapter weights (e.g., Safe LoRA adapters), the upload script first merged them into the base model, so this repository contains full model weights that can be loaded directly, with no adapter code required.
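For intuition, "merging" folds the adapter's low-rank update into the base weights once, so inference needs no adapter machinery (with `peft`, this is what `merge_and_unload()` does). The sketch below illustrates the underlying arithmetic under standard LoRA conventions (rank `r`, scaling `alpha`); the dimensions are hypothetical and this is not the actual upload script:

```python
import numpy as np

# Illustrative sketch of LoRA merging, assuming standard conventions.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4  # hypothetical sizes

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # LoRA down-projection
B = rng.standard_normal((d_out, r))     # LoRA up-projection

# The low-rank update is folded into the base weight once:
#   W_merged = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * (B @ A)

# A forward pass through the merged weight matches base + adapter exactly,
# so the merged checkpoint behaves like the adapted model on its own.
x = rng.standard_normal(d_in)
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)))
```

This is why the repository can be loaded with plain `AutoModelForCausalLM.from_pretrained` rather than `peft`.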

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "kmseong/Llama-3.2-3B-instruct-SafeLoRA"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate text
prompt = "How can I help you today?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
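Since this is an instruct model, results are usually better when the prompt follows the Llama 3 chat format; in practice `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` handles this. Purely as an illustration, the hypothetical helper below hand-builds such a prompt (special-token names assumed from the Llama 3 format):

```python
from typing import Optional

def build_llama3_prompt(user_message: str, system: Optional[str] = None) -> str:
    """Hand-build a Llama 3-style chat prompt (illustrative; prefer
    tokenizer.apply_chat_template in real code)."""
    parts = ["<|begin_of_text|>"]
    if system:
        parts.append(f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>")
    parts.append(f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>")
    # Leave the assistant header open so generation continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt("How can I help you today?")
```

The resulting string can be passed to the tokenizer in place of the raw prompt above.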

Safety Note

This model was produced with Safe LoRA and uploaded as a merged full model.

License

This repository is released under the Apache 2.0 License. Note that the base model (meta-llama/Llama-3.2-3B-Instruct) is distributed under its own license terms, which also apply; see the base model card for details.

Format: Safetensors · 3B params · BF16