# Qwen3-4B PII NER: LoRA Fine-tuned for PII Entity Extraction
A fine-tuned version of Qwen/Qwen3-4B-Instruct-2507 trained to extract Personally Identifiable Information (PII) from unstructured text. The model outputs a structured JSON object containing detected entities, organized by type.
Trained on the DAXAAI-Research/synthetic-pii-dataset-v2.4-dense dataset using LoRA adapters via TRL's SFTTrainer.
## Model Overview
| Field | Details |
|---|---|
| Base Model | Qwen/Qwen3-4B-Instruct-2507 |
| Task | Named Entity Recognition (NER) - PII Detection |
| Method | Supervised Fine-Tuning (SFT) with LoRA (PEFT) |
| Framework | TRL SFTTrainer + HuggingFace Transformers + PEFT |
| Language | English |
| License | Apache 2.0 |
| Dataset | DAXAAI-Research/synthetic-pii-dataset-v2.4-dense |
| W&B Run | [View training run](https://wandb.ai/daxa/qwen-dft-ner/runs/mlwf2yyu) |
## Target Entity Types (21)
BBAN_CODE, CREDIT_CARD, DATE_OF_BIRTH, EMAIL_ADDRESS, HEALTH_INSURANCE_NUMBER, HONG_KONG_ID, IBAN_CODE, INDIA_AADHAAR, INDIA_PAN, IP_ADDRESS, LICENSE_PLATE_NUMBER, MEDICAL_RECORD_NUMBER, PHONE_NUMBER, ROUTING_NUMBER, SWIFT_CODE, US_BANK_NUMBER, US_DRIVER_LICENSE, US_ITIN, US_PASSPORT, US_SSN, VEHICLE_VIN
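For downstream filtering and validation it can help to keep this label set in code. A minimal sketch (the set is taken verbatim from the list above; the helper name is illustrative):

```python
# The 21 PII entity types this model is trained to emit as JSON keys.
ENTITY_TYPES = frozenset({
    "BBAN_CODE", "CREDIT_CARD", "DATE_OF_BIRTH", "EMAIL_ADDRESS",
    "HEALTH_INSURANCE_NUMBER", "HONG_KONG_ID", "IBAN_CODE", "INDIA_AADHAAR",
    "INDIA_PAN", "IP_ADDRESS", "LICENSE_PLATE_NUMBER", "MEDICAL_RECORD_NUMBER",
    "PHONE_NUMBER", "ROUTING_NUMBER", "SWIFT_CODE", "US_BANK_NUMBER",
    "US_DRIVER_LICENSE", "US_ITIN", "US_PASSPORT", "US_SSN", "VEHICLE_VIN",
})

def unknown_keys(prediction: dict) -> set:
    """Return any keys the model emitted that are not valid entity types."""
    return set(prediction) - ENTITY_TYPES
```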
## Training Configuration

### Hyperparameters
| Parameter | Value |
|---|---|
| Epochs | ~1.32 (380 steps) |
| Batch Size (per device) | 10 |
| Gradient Accumulation Steps | 3 |
| Effective Batch Size | 30 |
| Learning Rate | 1e-5 |
| LR Scheduler | Cosine |
| Warmup Ratio | 0.1 |
| Weight Decay | 0.01 |
| Precision | BF16 |
| Max Sequence Length | 8,192 |
| Optimizer | AdamW (Ξ²1=0.9, Ξ²2=0.999, Ξ΅=1e-8) |
| Loss | Assistant-only (SFT) |
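The table above maps onto TRL's `SFTConfig` roughly as follows. This is a sketch, not the actual training script; the argument names follow recent TRL releases and are an assumption:

```python
# Hyperparameters from the table, expressed as SFTConfig-style keyword arguments.
# Effective batch size = 10 per device x 3 accumulation steps = 30.
sft_kwargs = dict(
    num_train_epochs=1.32,           # ~380 optimizer steps on this dataset
    per_device_train_batch_size=10,
    gradient_accumulation_steps=3,
    learning_rate=1e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    weight_decay=0.01,
    bf16=True,
    max_length=8192,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# These would be passed as trl.SFTConfig(**sft_kwargs).
```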
### LoRA Configuration
| Parameter | Value |
|---|---|
| Rank (r) | 256 |
| Alpha | 128 |
| Dropout | 0.05 |
| Target Modules | q_proj, k_proj, v_proj, o_proj |
| Task Type | CAUSAL_LM |
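The same settings as `peft.LoraConfig`-style keyword arguments (a sketch; note that alpha/r = 128/256 = 0.5, so adapter updates are down-scaled by half):

```python
# LoRA settings from the table, as LoraConfig-style keyword arguments.
lora_kwargs = dict(
    r=256,
    lora_alpha=128,                # scaling factor alpha/r = 0.5
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
# These would be passed as peft.LoraConfig(**lora_kwargs).
```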
## Training Results
| Metric | Train | Eval |
|---|---|---|
| Loss | 0.0144 | 0.0239 |
| Mean Token Accuracy | 99.6% | 99.4% |
| Entropy | 1.086 | 1.072 |
## System Prompt
The model expects the following system prompt at inference time:
```
You are a Named Entity Recognition assistant. Extract the following entities from the input text and output as JSON.
Output format: a JSON object with entity types as keys and arrays of extracted values. Do NOT include character positions, start/end indices, or spans - only entity types and their values.
Entity types to extract:
- BBAN_CODE
- CREDIT_CARD
- DATE_OF_BIRTH
- EMAIL_ADDRESS
- HEALTH_INSURANCE_NUMBER
- HONG_KONG_ID
- IBAN_CODE
- INDIA_AADHAAR
- INDIA_PAN
- IP_ADDRESS
- LICENSE_PLATE_NUMBER
- MEDICAL_RECORD_NUMBER
- PHONE_NUMBER
- ROUTING_NUMBER
- SWIFT_CODE
- US_BANK_NUMBER
- US_DRIVER_LICENSE
- US_ITIN
- US_PASSPORT
- US_SSN
- VEHICLE_VIN
IMPORTANT RULES:
- Only include entity types that have extracted values in your output
- Do NOT include entity types with empty arrays - omit them entirely
- Extract the exact entity values as they appear in the text
- Do not infer or guess entities that are not explicitly present
- Output valid JSON only (entity types + values, no positions or indices)
- If no entities are found at all, output an empty JSON object: {}
Example - if the text contains an email, a phone number, and an SSN but nothing else, output:
{"EMAIL_ADDRESS": ["john.doe@example.com"], "PHONE_NUMBER": ["555-123-4567"], "US_SSN": ["123-45-6789"]}
Do NOT include keys like "CREDIT_CARD": [] or "IBAN_CODE": [] - if an entity type has no matches, leave it out completely.
```
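The rules above are instructions to the model, not guarantees, so it is prudent to normalize its output before use. A stdlib-only sketch (the function name is illustrative):

```python
import json

def parse_pii_output(raw: str) -> dict:
    """Parse model output, dropping empty arrays and non-list values.

    Returns {} if the output is not a valid JSON object, mirroring the
    "empty JSON object" convention in the prompt above.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    if not isinstance(data, dict):
        return {}
    # Enforce the "omit empty arrays" rule even if the model breaks it.
    return {k: v for k, v in data.items() if isinstance(v, list) and v}
```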
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "DAXAAI-Research/qwen-pii-ner-adapters-v4-sparse"
base_model = "Qwen/Qwen3-4B-Instruct-2507"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype="auto", device_map="auto")
model.load_adapter(model_id)  # requires `peft` to be installed

text = "Contact John at john.doe@example.com or 555-123-4567. His SSN is 123-45-6789."
messages = [
    {"role": "system", "content": "<system prompt above>"},
    {"role": "user", "content": text},
]

inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
# Greedy decoding; temperature is not passed since it is ignored when do_sample=False.
outputs = model.generate(inputs.to(model.device), max_new_tokens=1500, do_sample=False)
result = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(result)
# {"EMAIL_ADDRESS": ["john.doe@example.com"], "PHONE_NUMBER": ["555-123-4567"], "US_SSN": ["123-45-6789"]}
```
## Links
- wandb: https://wandb.ai/daxa/qwen-dft-ner/runs/mlwf2yyu
- dataset: https://huggingface.co/datasets/DAXAAI-Research/synthetic-pii-dataset-v2.4-dense
## Framework Versions
| Library | Version |
|---|---|
| TRL | 0.29.1 |
| Transformers | 5.3.0 |
| PyTorch | 2.10.0 |
| Datasets | 4.8.4 |
| Tokenizers | 0.22.2 |
| PEFT | (bundled with TRL) |