DTOL Claim Classifier - Qwen2.5-3B LoRA

A cause-of-failure classification model for construction equipment after-sales (AS) claims.

Model Details

  • Base Model: Qwen/Qwen2.5-3B
  • Fine-tuning: LoRA (r=32, alpha=64)
  • Task: 27-class classification (cause analysis)
  • Accuracy: 43.00%
  • Top-5 Accuracy: 86.10%
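The adapter setup above can be sketched with peft's `LoraConfig`. Only `r=32` and `alpha=64` come from the card; `target_modules` and `lora_dropout` below are assumptions (the card does not state which modules were adapted):

```python
from peft import LoraConfig, TaskType

# Sketch of the adapter configuration from the card (r=32, alpha=64).
# target_modules and lora_dropout are assumptions, not stated in the card.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence classification head
    r=32,                        # LoRA rank
    lora_alpha=64,               # scaling factor (alpha / r = 2)
    lora_dropout=0.05,           # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)
```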

Training Details

  • Epochs: 10
  • Batch Size: 4 (with gradient accumulation 4)
  • Learning Rate: 1e-4 (cosine scheduler)
  • Data: 4,000 training samples (imbalanced)
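The listed hyperparameters map onto a `transformers.TrainingArguments` sketch as follows; `output_dir` and `bf16` are assumptions (bf16 matches the bfloat16 inference dtype in the Usage section), everything else is taken from the bullets above:

```python
from transformers import TrainingArguments

# Sketch matching the hyperparameters listed in the card.
# output_dir and bf16 are assumptions.
training_args = TrainingArguments(
    output_dir="dtol-claim-classifier",  # assumed
    num_train_epochs=10,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,       # effective batch size 16
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    bf16=True,                           # assumed
)
```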

Usage

import torch  # needed for torch.bfloat16 below
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

# Load base model
base_model = AutoModelForSequenceClassification.from_pretrained(
    "Qwen/Qwen2.5-3B",
    num_labels=27,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "YOUR_USERNAME/dtol-claim-classifier-qwen3b")

# Tokenize and predict
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B")
inputs = tokenizer("defect description text", return_tensors="pt").to(model.device)
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()
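Given the 86.10% top-5 accuracy versus 43.00% top-1, surfacing the five most likely causes is often more useful than a single prediction. A minimal sketch using `torch.topk`, with random logits standing in for the real `outputs.logits`:

```python
import torch

# Dummy logits standing in for outputs.logits from the classifier
# (shape: [batch, 27 classes]); in practice use the real model output.
logits = torch.randn(1, 27)

# Convert to probabilities and take the five highest-scoring classes.
probs = torch.softmax(logits, dim=-1)
top5 = torch.topk(probs, k=5, dim=-1)
for score, idx in zip(top5.values[0], top5.indices[0]):
    print(f"class {idx.item()}: {score.item():.3f}")
```

The class indices returned here would be mapped to the label names in the next section via the model's `id2label` config.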

Labels (27 classes)

  • Poor material
  • Improper manufacturing
  • Short
  • Improper assembly
  • Poor Design
  • ... and 22 more

Note

์ด ๋ชจ๋ธ์€ ML+RAG ์•™์ƒ๋ธ” (64.8%)๋ณด๋‹ค ๋‚ฎ์€ ์„ฑ๋Šฅ์„ ๋ณด์ž…๋‹ˆ๋‹ค. ์‹ค์ œ ์„œ๋น„์Šค์—๋Š” ML+RAG ์•™์ƒ๋ธ” ํŒŒ์ดํ”„๋ผ์ธ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค.

License

MIT
