# viet-legal-1.7B

A Vietnamese legal small language model (SLM) fine-tuned from Qwen/Qwen3-1.7B on the VLegal-Bench dataset using QLoRA.

## Benchmark Results

| Metric | Epoch 1 | Epoch 2 | Epoch 3 |
|---|---|---|---|
| Loss | 0.7695 | 0.7558 | 0.7640 |
| Runtime (s) | 379.3660 | 378.9926 | 380.5886 |
| Samples/s | 3.2290 | 3.2320 | 3.2190 |
| Steps/s | 0.8090 | 0.8100 | 0.8070 |
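Epoch 2 yields the lowest loss, which is why it is reported as the best epoch in the training details. A minimal check, with the values copied from the table:

```python
# Per-epoch loss values from the benchmark table.
loss_by_epoch = {1: 0.7695, 2: 0.7558, 3: 0.7640}

# The best epoch is simply the one with the lowest loss.
best_epoch = min(loss_by_epoch, key=loss_by_epoch.get)
print(best_epoch)  # → 2
```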

## Training Details

| Parameter | Value |
|---|---|
| Base model | Qwen/Qwen3-1.7B |
| Dataset | legal-combined (11,025 train / 1,225 eval) |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Learning rate | 2e-4 |
| Epochs | 3 |
| Batch size | 4 × 4 (effective: 16) |
| Precision | fp16 |
| Max seq length | 2048 |
| Training time | 8h 54m |
| Best epoch | 2 |
| Hardware | Kaggle T4 (16 GB VRAM) |
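Two derived quantities follow from these hyperparameters: LoRA scales each adapter update by alpha/r, and the effective batch size is the product of the two batch factors. A small sketch of the arithmetic (the "4 × 4" split is assumed here to mean per-device batch × gradient-accumulation steps):

```python
# LoRA scaling factor: adapter updates are multiplied by alpha / r.
lora_r, lora_alpha = 16, 32
scaling = lora_alpha / lora_r
print(scaling)  # → 2.0

# Assumed reading of "4 x 4": per-device batch x gradient accumulation.
per_device_batch, grad_accum = 4, 4
effective_batch = per_device_batch * grad_accum
print(effective_batch)  # → 16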

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "datht/viet-legal-1.7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

messages = [
    # "You are a Vietnamese legal assistant."
    {"role": "system", "content": "Bạn là một trợ lý pháp luật Việt Nam."},
    # "How is theft of property worth 5 million VND handled?"
    {"role": "user", "content": "Hành vi trộm cắp tài sản trị giá 5 triệu đồng bị xử lý như thế nào?"},
]

# Build the prompt with the model's chat template, then generate.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
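The final `decode` call slices from the prompt length because `generate` returns the prompt followed by the completion. A toy illustration of that slicing, with stand-in token IDs:

```python
# Stand-in token IDs for the prompt and for what generate() returns.
prompt_ids = [101, 7, 42]
output_ids = prompt_ids + [9, 8, 5]  # prompt tokens + newly generated tokens

# Slicing from the prompt length keeps only the completion.
completion = output_ids[len(prompt_ids):]
print(completion)  # → [9, 8, 5]
```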

## Dataset

VLegal-Bench — A cognitively grounded benchmark for Vietnamese legal reasoning, comprising 10,467 samples across 22 tasks organized into 5 categories:

  1. Recognition & Recall (5 tasks) — Entity recognition, topic classification, concept recall
  2. Understanding & Structuring (5 tasks) — Relation extraction, legal graph structuring
  3. Reasoning & Inference (5 tasks) — Article prediction, court decision prediction
  4. Interpretation & Generation (3 tasks) — Summarization, judicial reasoning
  5. Ethics, Fairness & Bias (4 tasks) — Bias detection, privacy, ethical assessment
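For supervised fine-tuning, each benchmark record is typically serialized into chat format before training. A hypothetical example of what one converted record might look like (the field names and content are illustrative, not the dataset's actual schema):

```python
import json

# Illustrative chat-format record; the real VLegal-Bench schema may differ.
sample = {
    "messages": [
        {"role": "system", "content": "You are a Vietnamese legal assistant."},
        {"role": "user", "content": "Which article governs theft of property?"},
        {"role": "assistant", "content": "Under the 2015 Penal Code, ..."},
    ],
    "task": "article_prediction",       # one of the 22 tasks
    "category": "Reasoning & Inference", # one of the 5 categories
}
print(json.dumps(sample, ensure_ascii=False, indent=2))
```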

## Training Framework

Trained with nlp-trainer, using Unsloth and the TRL `SFTTrainer` with QLoRA on a Kaggle T4 GPU.

## Citation

```bibtex
@misc{dong2025vlegalbench,
    title={VLegal-Bench: Cognitively Grounded Benchmark for Vietnamese Legal Reasoning of Large Language Models},
    author={Nguyen Tien Dong and others},
    year={2025},
    eprint={2512.14554},
    archivePrefix={arXiv},
}
```