# yrhong/llama-3.1-8b-instruct-diat-seed43
A LoRA adapter for `meta-llama/Llama-3.1-8B-Instruct`.

- Training setting: DIAT
- Seed: 43
- Adapter format: PEFT LoRA
## Usage (Transformers + PEFT)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "meta-llama/Llama-3.1-8B-Instruct"
adapter_repo = "yrhong/llama-3.1-8b-instruct-diat-seed43"

# Load the base model first, then attach the LoRA adapter on top of it
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_repo)

# The adapter does not change the vocabulary, so the base tokenizer is used as-is
tokenizer = AutoTokenizer.from_pretrained(base_model)
```
## Notes

- This repository contains adapter weights only, not the full base model weights.
- Intended for research and evaluation purposes only.