# WeIN_bio_qwen-3-8B

ํ•œ๊ตญ์–ด ์˜๋ฃŒ/๋ฐ”์ด์˜ค ๋„๋ฉ”์ธ ํŠนํ™” Qwen3-8B LoRA ํŒŒ์ธํŠœ๋‹ ๋ชจ๋ธ. KorMedMCQA ๋ฒค์น˜๋งˆํฌ 69.06% ๋‹ฌ์„ฑ (๋ชฉํ‘œ 65% ์ดˆ๊ณผ).

## Performance

| Metric | Value |
|---|---|
| Overall Accuracy | 69.06% (2,078/3,009) |
| Extraction Fail Rate | 0.00% |
| Evaluation Mode | Direct (zero-shot) |

### Per-Subject Accuracy

| Subject | Accuracy |
|---|---|
| ๊ฐ„ํ˜ธ์‚ฌ (Nurse) | 78.25% |
| ์•ฝ์‚ฌ (Pharmacist) | 73.06% |
| ์•ฝํ•™ (Pharm Science) | 68.73% |
| ์˜์‚ฌ (Doctor) | 68.28% |
| ์น˜๊ณผ์˜์‚ฌ (Dentist) | 58.45% |

## SOTA Comparison (KorMedMCQA)

| Model | Accuracy | License |
|---|---|---|
| WeIN_bio_qwen-3-8B | 69.06% | Apache 2.0 |
| Qwen3-8B (baseline) | 60.39% | Apache 2.0 |
| EXAONE 7.8B | 56.10% | Non-Commercial |

## Training

| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen3-8B |
| Method | LoRA (r=16, alpha=32, dropout=0.1) |
| Target Modules | q/k/v/o/gate/up/down_proj |
| Training Data | 35,882 samples (Korean Medical SFT) |
| Epochs | 3 (3,366 steps) |
| Batch Size | 2 × 16 (grad accum) = 32 |
| Learning Rate | 2e-4 (cosine) |
| Precision | bf16 |
| Final Loss | 0.2854 |
| Hardware | NVIDIA H200 |
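The LoRA hyperparameters above map directly onto a `peft` `LoraConfig`. A sketch reconstructed from the table (the actual training script is not published with this card):

```python
from peft import LoraConfig

# Reconstructed from the Training table above; assumes a causal-LM task.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections
        "gate_proj", "up_proj", "down_proj",      # MLP projections
    ],
    task_type="CAUSAL_LM",
)
```

Targeting all seven projection matrices (attention plus MLP) is a common choice for Qwen-family models when the goal is domain adaptation rather than a minimal adapter.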

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "Qwen/Qwen3-8B"
adapter_id = "dhkim0324/WeIN_bio_qwen-3-8B"

# Load the tokenizer from the adapter repo, then attach the LoRA adapter
# to the base model.
tokenizer = AutoTokenizer.from_pretrained(adapter_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(model, adapter_id)

# KorMedMCQA-style prompt; greedy decoding matches the direct (zero-shot)
# evaluation mode reported above.
prompt = "๋‹ค์Œ ์˜๋ฃŒ ๊ด€๋ จ ๊ฐ๊ด€์‹ ๋ฌธ์ œ์— ๋‹ตํ•˜์‹œ์˜ค.\n\n๋ฌธ์ œ: ์‹ฌ๊ทผ๊ฒฝ์ƒ‰์˜ ๊ฐ€์žฅ ํ”ํ•œ ์›์ธ์€?\n1. ๊ด€์ƒ๋™๋งฅ ์ฃฝ์ƒ๊ฒฝํ™”์ฆ\n2. ์‹ฌ์žฅํŒ๋ง‰์งˆํ™˜\n3. ์‹ฌ๊ทผ์—ผ\n4. ๋Œ€๋™๋งฅ๋ฐ•๋ฆฌ\n\n์ •๋‹ต:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
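For batch evaluation, the prompt format shown above can be generated programmatically. A minimal sketch (the helper name is mine, not part of the model card):

```python
def build_prompt(question: str, choices: list[str]) -> str:
    """Format a multiple-choice question in the KorMedMCQA prompt style
    used in the usage example above (1-indexed numbered choices, ending
    with the '์ •๋‹ต:' answer cue)."""
    lines = "\n".join(f"{i}. {c}" for i, c in enumerate(choices, start=1))
    return (
        "๋‹ค์Œ ์˜๋ฃŒ ๊ด€๋ จ ๊ฐ๊ด€์‹ ๋ฌธ์ œ์— ๋‹ตํ•˜์‹œ์˜ค.\n\n"
        f"๋ฌธ์ œ: {question}\n{lines}\n\n์ •๋‹ต:"
    )
```

Because the prompt ends with `์ •๋‹ต:`, greedy decoding tends to emit the choice number first, which keeps answer extraction trivial.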

## Limitations

- This model cannot replace the clinical judgment of medical professionals and should be used for research/educational purposes only.
- Performance in the dentist domain is comparatively low (58.45%).
- Performance on benchmarks other than KorMedMCQA is unverified.

## Citation

```bibtex
@misc{wein_bio_qwen3_2026,
    title={WeIN_bio_qwen-3-8B: Korean Medical Domain LoRA Adapter},
    author={dhkim0324},
    year={2026},
    publisher={Hugging Face}
}
```