qwen3-4b-structured-output-lora-v3
This repository provides a LoRA adapter (v3) fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit, Unsloth).
This repository contains LoRA adapter weights only. The base model must be loaded separately.
Version: v3 — Data Scaling
This is v3 of the SFT training, focused on increasing the amount of training data. Based on v2's result (score: 0.75074), we more than doubled the training data.
Changes from v2
| Parameter | v2 | v3 | Rationale |
|---|---|---|---|
| Dataset | 1-1_512_v2 (3,933) | Merged (8,541) | +117% data for better pattern learning |
| MAX_SEQ_LEN | 1024 | 1024 | Same as v2 |
| Epochs | 1 | 1 | Same as v2 |
| Learning Rate | 5e-6 | 5e-6 | Same as v2 |
Merged Dataset Composition
- 1-1_512_v2: 3,933 samples
- 1-2_512_v4: 4,608 samples
- Total: 8,541 samples
Format distribution: XML (23.4%), JSON (23.3%), YAML (18.2%), TOML (17.8%), CSV (17.3%)
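For reference, merging the two source splits can look like the sketch below, assuming the splits are available as local JSONL files (the filenames and the `datasets` loading path are assumptions and are not part of this release):

```python
from datasets import load_dataset, concatenate_datasets

# Hypothetical local files; the actual dataset files are not published with this adapter.
ds_a = load_dataset("json", data_files="1-1_512_v2.jsonl", split="train")  # 3,933 samples
ds_b = load_dataset("json", data_files="1-2_512_v4.jsonl", split="train")  # 4,608 samples

# Concatenate and shuffle to form the merged training set.
merged = concatenate_datasets([ds_a, ds_b]).shuffle(seed=42)
print(len(merged))  # expected: 8,541
```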
Training Objective
This adapter is trained to improve structured output accuracy (JSON / YAML / XML / TOML / CSV) for the StructEval-T benchmark.
Loss is applied only to the final assistant output, while intermediate reasoning (Chain-of-Thought) is masked.
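A minimal sketch of this masking scheme, assuming each tokenized sample records where the final assistant output begins (the helper name and index handling are illustrative, not the actual training code):

```python
def build_labels(input_ids, final_output_start):
    """Mask everything before the final assistant output so that only this span
    contributes to the loss (-100 is ignored by the cross-entropy loss)."""
    labels = list(input_ids)
    for i in range(final_output_start):
        labels[i] = -100
    return labels

# Example: a 10-token sequence where the final answer begins at index 6.
labels = build_labels(list(range(10)), final_output_start=6)
# -> [-100, -100, -100, -100, -100, -100, 6, 7, 8, 9]
```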
Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: QLoRA (4-bit, Unsloth)
- Max sequence length: 1024
- Epochs: 1
- Learning rate: 5e-06
- Batch size: 2 (effective: 16)
- Gradient accumulation: 8
- LoRA: r=64, alpha=128
- CoT masking: enabled (loss on final output only)
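For orientation, the configuration above maps onto an Unsloth/TRL training script roughly as follows. Only the values listed in this card come from the actual run; the target modules, dataset handling, and other arguments are assumptions, and exact SFTTrainer arguments vary across trl versions:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit (QLoRA) at the sequence length used for this run.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B-Instruct-2507",
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters with the rank/alpha from this card.
# The target modules are an assumption (a common choice for Qwen-style models).
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=merged,              # the merged dataset from the sketch above
    dataset_text_field="text",         # assumption: samples pre-rendered to a "text" column
    max_seq_length=1024,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,  # effective batch size 16
        num_train_epochs=1,
        learning_rate=5e-6,
        output_dir="outputs",
    ),
)
trainer.train()
```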
Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "kmd2525/qwen3-4b-structured-output-lora-v3"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
```
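Once the adapter is loaded, a structured-output request can be issued through the chat template; the prompt and generation settings below are illustrative:

```python
messages = [
    {"role": "user", "content": "Return the fields name, age, and city for a sample user as JSON."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens (skip the prompt).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```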
Sources & Terms (IMPORTANT)
Training data: merged dataset (1-1_512_v2 + 1-2_512_v4)
Dataset license: MIT. The training data is used and distributed under the terms of the MIT License; users must comply with that license (including retention of the copyright notice) as well as the base model's original terms of use.