# WeIN_bio_qwen-3-8B

A Qwen3-8B LoRA fine-tune specialized for the Korean medical/bio domain.
It scores 69.06% on the KorMedMCQA benchmark, exceeding the 65% target.
## Performance

| Metric | Value |
|---|---|
| Overall Accuracy | 69.06% (2,078/3,009) |
| Extraction Fail Rate | 0.00% |
| Evaluation Mode | Direct (zero-shot) |
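The 0.00% extraction fail rate means every generation contained a parseable choice. Below is a minimal sketch of the kind of answer parser such a direct (zero-shot) evaluation can use; the `extract_answer` helper and the `정답:` ("Answer:") marker are illustrative assumptions, not the exact harness behind the numbers above.

```python
import re

def extract_answer(generation: str) -> int | None:
    # Hypothetical parser: take the first choice digit 1-4 that follows
    # the answer marker the prompt ends with ("정답:"). Returning None
    # would count toward the extraction fail rate.
    match = re.search(r"정답\s*[:：]?\s*([1-4])", generation)
    if match is None:
        # Fall back to the first standalone digit 1-4 anywhere in the output.
        match = re.search(r"\b([1-4])\b", generation)
    return int(match.group(1)) if match else None
```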
### Per-Subject Accuracy

| Subject | Accuracy |
|---|---|
| Nurse (간호사) | 78.25% |
| Pharmacist (약사) | 73.06% |
| Pharm Science (약학) | 68.73% |
| Doctor (의사) | 68.28% |
| Dentist (치과의사) | 58.45% |
### SOTA Comparison (KorMedMCQA)

| Model | Accuracy | License |
|---|---|---|
| WeIN_bio_qwen-3-8B | 69.06% | Apache 2.0 |
| Qwen3-8B (baseline) | 60.39% | Apache 2.0 |
| EXAONE 7.8B | 56.10% | Non-Commercial |
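For reference, the comparison can be reproduced by pulling KorMedMCQA from the Hugging Face Hub and scoring each subject separately. A minimal loading sketch, assuming the public `sean0042/KorMedMCQA` dataset layout (one config per subject with a `test` split); verify config and split names against the dataset card before relying on them:

```python
from datasets import load_dataset

# Assumed subject configs; check the dataset card before relying on these.
for subject in ["doctor", "nurse", "pharm", "dentist"]:
    test_split = load_dataset("sean0042/KorMedMCQA", subject, split="test")
    print(f"{subject}: {len(test_split)} questions")
```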
## Training

| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen3-8B |
| Method | LoRA (r=16, alpha=32, dropout=0.1) |
| Target Modules | q/k/v/o/gate/up/down_proj |
| Training Data | 35,882 samples (Korean Medical SFT) |
| Epochs | 3 (3,366 steps) |
| Batch Size | 2 × 16 (grad accum) = 32 |
| Learning Rate | 2e-4 (cosine) |
| Precision | bf16 |
| Final Loss | 0.2854 |
| Hardware | NVIDIA H200 |
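The table above maps directly onto a peft `LoraConfig`: the adapter targets all attention and MLP projections. A minimal sketch of a matching adapter setup (the training loop, optimizer, scheduler, and data pipeline are omitted; this is not the author's exact training script):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Adapter hyperparameters taken from the Training table above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B", torch_dtype="auto", device_map="auto"
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # LoRA trains only a small fraction of weights
```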
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "Qwen/Qwen3-8B"
adapter_id = "dhkim0324/WeIN_bio_qwen-3-8B"

# The tokenizer is loaded from the adapter repository.
tokenizer = AutoTokenizer.from_pretrained(adapter_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
# Attach the LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(model, adapter_id)

# Korean medical MCQ: "What is the most common cause of myocardial
# infarction?" (1. coronary atherosclerosis, 2. valvular heart disease,
# 3. myocarditis, 4. aortic dissection)
prompt = "다음 의료 관련 객관식 문제에 답하시오.\n\n문제: 심근경색의 가장 흔한 원인은?\n1. 관상동맥 죽상경화증\n2. 심장판막질환\n3. 심근염\n4. 대동맥박리\n\n정답:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
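If you need a standalone checkpoint (e.g. for serving stacks without a peft dependency), the adapter can be folded into the base weights. A short sketch continuing from the snippet above; the output path is illustrative:

```python
# Fold the LoRA deltas into the base weights and save a standalone model.
merged = model.merge_and_unload()
merged.save_pretrained("WeIN_bio_qwen-3-8B-merged")  # illustrative path
tokenizer.save_pretrained("WeIN_bio_qwen-3-8B-merged")
```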
## Limitations

- The model cannot replace the clinical judgment of medical professionals and should be used for research/education purposes only.
- Performance on the Dentist subject is comparatively low (58.45%).
- The model is unvalidated on benchmarks other than KorMedMCQA.
## Citation

```bibtex
@misc{wein_bio_qwen3_2026,
  title={WeIN_bio_qwen-3-8B: Korean Medical Domain LoRA Adapter},
  author={dhkim0324},
  year={2026},
  publisher={Hugging Face}
}
```