yrhong/llama-3.1-8b-instruct-diat-seed43

LoRA adapter for meta-llama/Llama-3.1-8B-Instruct.

  • Training setting: DIAT
  • Seed: 43
  • Adapter format: PEFT LoRA

Usage (Transformers + PEFT)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "meta-llama/Llama-3.1-8B-Instruct"
adapter_repo = "yrhong/llama-3.1-8b-instruct-diat-seed43"

# Load the base model, then attach the LoRA adapter on top of it.
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_repo)
# The adapter repo ships no tokenizer; use the base model's.
tokenizer = AutoTokenizer.from_pretrained(base_model)
```
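As a minimal sketch of running inference with the loaded adapter, a hypothetical `chat` helper (the function name and defaults are illustrative, not part of this repo) might look like:

```python
def chat(model, tokenizer, user_msg: str, max_new_tokens: int = 128) -> str:
    # Hypothetical helper; assumes `model` and `tokenizer` from the snippet above.
    import torch

    messages = [{"role": "user", "content": user_msg}]
    # Format the prompt with the Llama 3.1 chat template.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        out = model.generate(inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True)
```

Note that loading the base model requires access to the gated meta-llama repository and roughly 16 GB of weights.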

Notes

  • This repository contains adapter weights only, not full base model weights.
  • Use only for research and evaluation purposes.
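Since this repo contains adapter weights only, one option for standalone deployment is to fold the LoRA deltas into the base weights with PEFT's `merge_and_unload`. A hedged sketch (the helper name and output directory are illustrative):

```python
def merge_adapter(base_model: str, adapter_repo: str, out_dir: str) -> None:
    # Hypothetical helper: merges the LoRA adapter into the base model so the
    # result loads without PEFT. Downloads the full base weights (~16 GB).
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype="auto")
    model = PeftModel.from_pretrained(model, adapter_repo)
    merged = model.merge_and_unload()  # folds LoRA deltas into the base weights
    merged.save_pretrained(out_dir)
    AutoTokenizer.from_pretrained(base_model).save_pretrained(out_dir)
```

The merged checkpoint can then be loaded with `AutoModelForCausalLM.from_pretrained(out_dir)` alone, at the cost of storing a full copy of the 8B weights.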