# Qwen3-8B LoRA: Korean Sequence Sorting
This repo contains only the LoRA adapter and tokenizer/template files for a Qwen3-8B base model. Use it by loading the base model first and then applying this adapter with PEFT.
## Usage
- Base model: `Qwen/Qwen3-8B` (replace with your exact base if different)
- Adapter repo (this repo): `HwanChang0106/qwen3-seq-sort-ko`
Python example:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-8B"                    # base model
lora_id = "HwanChang0106/qwen3-seq-sort-ko"  # this repo

tok = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
    trust_remote_code=True,
)
# Apply the LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(base, lora_id)

# "Sort the following sequence in ascending order: 3, 1, 2"
prompt = "다음 수열을 오름차순으로 정렬하세요: 3, 1, 2"
inputs = tok(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(outputs[0], skip_special_tokens=True))
```
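Because the repo ships a `chat_template.jinja` and Qwen3-8B is a chat model, the adapter may respond more reliably if the instruction is wrapped in the chat format. A minimal sketch, assuming the adapter was trained on chat-formatted data and reusing `tok`/`model` from above:

```python
# Format the instruction as a chat turn using the bundled chat template.
messages = [{"role": "user", "content": "다음 수열을 오름차순으로 정렬하세요: 3, 1, 2"}]
input_ids = tok.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant-turn marker
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tok.decode(outputs[0], skip_special_tokens=True))
```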
If you prefer to merge the LoRA into the base weights for pure HF usage:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# base_id and lora_id as defined in the example above.
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, lora_id)
merged = model.merge_and_unload()  # returns a merged base model
```
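To reuse the merged weights later without PEFT, you can save them together with the tokenizer; the output directory below is only an example name:

```python
# Save as a plain Transformers checkpoint (no adapter needed to reload).
out_dir = "qwen3-seq-sort-ko-merged"  # example path, choose your own
merged.save_pretrained(out_dir)
tok.save_pretrained(out_dir)  # tok from the loading example above
```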
## Files
- adapter: `adapter_model.safetensors`, `adapter_config.json`
- tokenizer/template: `tokenizer.json`, `tokenizer_config.json`, `special_tokens_map.json`, `added_tokens.json`, `vocab.json`, `merges.txt`, `chat_template.jinja`
Optimizer or training state files are not included.
## Notes
- Set `trust_remote_code=True` when loading Qwen models.
- Match the exact base checkpoint you trained on for best results (a quick way to check is shown below).
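If you are unsure which base checkpoint the adapter expects, the adapter config records it. A minimal check using PEFT's `PeftConfig`:

```python
from peft import PeftConfig

# Read the adapter config without downloading any model weights.
cfg = PeftConfig.from_pretrained("HwanChang0106/qwen3-seq-sort-ko")
print(cfg.base_model_name_or_path)  # the base checkpoint the adapter was trained against
```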