# Civil Complaint Analysis & Processing System (LoRA Adapter)
This model is a LoRA adapter for LGAI-EXAONE/EXAONE-Deep-7.8B, fine-tuned on the Korean Civil Complaint dataset (AI Hub 71852/71844). It is optimized for classifying civil complaints and generating high-quality response drafts in a structured format.
## Model Details
- Base Model: LGAI-EXAONE/EXAONE-Deep-7.8B
- Fine-tuning Method: QLoRA (4-bit)
- Dataset: AI Hub Public and Private Civil Complaint Datasets (~10,000 samples)
- Max Sequence Length: 2048
- Language: Korean
## Training Results
The model achieved stable convergence during fine-tuning:
- Best Eval Loss: 1.0179 (at step 700)
- Epochs: 1
- Learning Rate: 2e-4
- LoRA Rank (r): 16
- LoRA Alpha: 32
## License
This model is released under the EXAONE AI Model License Agreement 1.1-NC, inherited from the base model. Usage is permitted for non-commercial research and educational purposes. For commercial use, please refer to the original license terms provided by LG AI Research.
## How to use
You can load this adapter on top of the base model with the `peft` library:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

model_id = "LGAI-EXAONE/EXAONE-Deep-7.8B"
adapter_id = "umyunsang/civil-complaint-exaone-lora"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Load the base model in bfloat16, then attach the LoRA adapter.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(model, adapter_id)
```
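Once the adapter is attached, you can prompt the model for a response draft. The prompt template below is illustrative only (an assumption, not necessarily the format used during fine-tuning; check the dataset's actual formatting):

```python
def build_prompt(complaint: str) -> str:
    # Hypothetical Korean instruction template; verify against the
    # actual fine-tuning data format before relying on it.
    return (
        "다음 민원을 분석하여 분류하고, 구조화된 답변 초안을 작성하세요.\n\n"
        f"민원 내용: {complaint}\n\n답변:"
    )

def generate_draft(model, tokenizer, complaint: str, max_new_tokens: int = 512) -> str:
    # Tokenize the prompt, generate, and return only the newly
    # generated text (the prompt tokens are sliced off).
    inputs = tokenizer(build_prompt(complaint), return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
    )
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Example call: `print(generate_draft(model, tokenizer, "도로에 포트홀이 생겨 차량이 파손되었습니다."))`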
## Acknowledgments
This project was supported by the SW Central University Project at Dong-A University. Special thanks to our mentor, Professor Sejin Chun (sjchun@dau.ac.kr).