yrhong/qwen3-8b-sft-seed42

LoRA adapter for Qwen/Qwen3-8B.

  • Training setting: supervised fine-tuning (SFT)
  • Seed: 42
  • Adapter format: PEFT LoRA

Usage (Transformers + PEFT)

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "Qwen/Qwen3-8B"
adapter_repo = "yrhong/qwen3-8b-sft-seed42"

# Load the full base model first, then attach the LoRA adapter weights on top.
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_repo)

# The adapter does not change the vocabulary, so the base tokenizer is used as-is.
tokenizer = AutoTokenizer.from_pretrained(base_model)

Notes

  • This repository contains adapter weights only, not full base model weights.
  • Use only for research and evaluation purposes.