DTOL Claim Classifier - Qwen2.5-3B LoRA
Cause classification model for construction-equipment after-sales (AS) claims
Model Details
- Base Model: Qwen/Qwen2.5-3B
- Fine-tuning: LoRA (r=32, alpha=64)
- Task: 27-class classification (cause analysis)
- Accuracy: 43.00%
- Top-5 Accuracy: 86.10%
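For reference, the adapter settings above map onto a PEFT LoraConfig roughly as sketched below; the target modules and dropout are assumptions, not values documented on this card.

from peft import LoraConfig, TaskType
# Sketch of the adapter configuration implied by the numbers above.
# target_modules and lora_dropout are assumed, not taken from this card.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence classification head
    r=32,                        # rank, as listed above
    lora_alpha=64,               # alpha, as listed above
    lora_dropout=0.05,           # assumed value
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical Qwen2 attention projections (assumed)
)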
Training Details
- Epochs: 10
- Batch Size: 4 (with gradient accumulation 4)
- Learning Rate: 1e-4 (cosine scheduler)
- Data: 4,000 training samples (imbalanced)
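As a rough guide, these hyperparameters correspond to the transformers TrainingArguments sketched below; the output directory, warmup, and precision flags are assumptions.

from transformers import TrainingArguments
# Approximate mapping of the hyperparameters above; output_dir, warmup_ratio, and bf16 are assumed.
training_args = TrainingArguments(
    output_dir="dtol-claim-classifier-qwen3b",  # hypothetical path
    num_train_epochs=10,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,  # effective batch size 16
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,              # assumed
    bf16=True,                      # assumed
    logging_steps=50,
)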
Usage
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel
# Load base model
base_model = AutoModelForSequenceClassification.from_pretrained(
"Qwen/Qwen2.5-3B",
num_labels=27,
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "YOUR_USERNAME/dtol-claim-classifier-qwen3b")
# Tokenize and predict
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B")
inputs = tokenizer("결함 설명 텍스트", return_tensors="pt").to(model.device)  # "defect description text"
outputs = model(**inputs)
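To turn the raw logits into a class index, plus a top-5 list matching the Top-5 Accuracy metric above, something like the following works (variable names are illustrative):

# Most likely class index (0-26) and the five most likely indices.
pred_id = outputs.logits.argmax(dim=-1).item()
top5_ids = outputs.logits.topk(k=5, dim=-1).indices[0].tolist()
print(pred_id, top5_ids)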
Labels (27 classes)
- Poor material
- Improper manufacturing
- Short
- Improper assembly
- Poor Design
- ... and 22 more
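If you prefer label names over indices, you can attach an id2label mapping to the model config. The sketch below is illustrative only: it lists just the five labels shown above, and the full 27-entry list must follow the exact label order used in training.

# Illustrative only: complete the list with the remaining 22 labels, in training order.
labels = [
    "Poor material",
    "Improper manufacturing",
    "Short",
    "Improper assembly",
    "Poor Design",
    # ... remaining 22 labels
]
model.config.id2label = {i: name for i, name in enumerate(labels)}
model.config.label2id = {name: i for i, name in enumerate(labels)}
print(model.config.id2label[pred_id])  # human-readable prediction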
Note
This model underperforms the ML+RAG ensemble (64.8%). For production use, the ML+RAG ensemble pipeline is recommended.
License
MIT