qwen3-4b-structured-output-lora-v1

This repository provides a LoRA adapter (v1) fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit, Unsloth).

This repository contains LoRA adapter weights only. The base model must be loaded separately.

Version: v1 (Hyperparameter Improvement)

This version improves on the baseline SFT training setup; the key changes were informed by a token-length analysis of the training dataset.

Changes from Baseline

| Parameter | Baseline | v1 | Rationale |
|---|---|---|---|
| MAX_SEQ_LEN | 512 | 1024 | Token-length analysis shows P99 = 640-961; 512 truncates data |
| Epochs | 1 | 3 | Small dataset (~3.6k rows) benefits from more passes |
| Learning Rate | 1e-6 | 2e-5 | Higher LR is effective for LoRA fine-tuning |
| Batch Size | 2 | 4 | L4/A100 has sufficient VRAM |
| Grad Accum | 8 | 4 | Reduced to keep the effective batch size at 16 |
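The MAX_SEQ_LEN choice follows from the token-length percentiles. A minimal sketch of that analysis (the helper name and the synthetic lengths below are illustrative, not the actual dataset):

```python
# Sketch: choose the smallest max_seq_len that covers a target percentile
# of tokenized example lengths. Synthetic lengths stand in for the dataset.
import numpy as np

def pick_max_seq_len(lengths, percentile=99, candidates=(512, 1024, 2048)):
    """Return (chosen length, percentile value) for the given lengths."""
    p = float(np.percentile(lengths, percentile))
    for c in candidates:
        if c >= p:
            return c, p
    return candidates[-1], p

# Synthetic lengths with P99 near the card's reported 640-961 range:
lengths = list(range(100, 962))
choice, p99 = pick_max_seq_len(lengths)
# 512 would truncate the long tail; 1024 covers P99.
```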

Training Objective

This adapter is trained to improve structured output accuracy (JSON / YAML / XML / TOML / CSV) for the StructEval-T benchmark.

Loss is applied only to the final assistant output, while intermediate reasoning (Chain-of-Thought) is masked.
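CoT masking of this kind is typically implemented by setting label ids to -100 (the index ignored by cross-entropy loss) for all tokens before the final answer. A minimal sketch, assuming the position of the final-answer span is known; the helper name and token ids are illustrative:

```python
# Sketch of CoT loss masking: labels are -100 (ignored by cross-entropy)
# for the prompt and reasoning tokens, so loss applies only to the answer.
IGNORE_INDEX = -100

def mask_labels(input_ids, answer_start):
    """Copy input_ids as labels, masking everything before answer_start."""
    labels = list(input_ids)
    for i in range(min(answer_start, len(labels))):
        labels[i] = IGNORE_INDEX
    return labels

# Tokens 0-5 are prompt + chain-of-thought; tokens 6+ are the final output.
ids = [11, 12, 13, 14, 15, 16, 21, 22, 23]
labels = mask_labels(ids, answer_start=6)
```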

Training Configuration

  • Base model: Qwen/Qwen3-4B-Instruct-2507
  • Method: QLoRA (4-bit, Unsloth)
  • Max sequence length: 1024
  • Epochs: 3
  • Learning rate: 2e-5
  • Batch size: 4 (effective: 16)
  • Gradient accumulation: 4
  • LoRA: r=64, alpha=128
  • CoT masking: enabled (loss on final output only)
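The settings above can be collected into a plain config dict to sanity-check the batch-size arithmetic (a framework-agnostic sketch; the dict keys are illustrative, while the values come from this card):

```python
# Hyperparameters from this card as a plain dict (training itself used
# Unsloth/QLoRA; this sketch only checks the derived quantities).
config = {
    "base_model": "Qwen/Qwen3-4B-Instruct-2507",
    "max_seq_len": 1024,
    "epochs": 3,
    "learning_rate": 2e-5,
    "per_device_batch_size": 4,
    "gradient_accumulation_steps": 4,
    "lora_r": 64,
    "lora_alpha": 128,
}

# Effective batch size = per-device batch * gradient accumulation steps.
effective_bs = (
    config["per_device_batch_size"] * config["gradient_accumulation_steps"]
)
```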

Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "kmd2525/qwen3-4b-structured-output-lora-v1"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(model, adapter)
```
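For structured-output tasks, the request is usually phrased as a chat asking for schema-conformant output. A sketch of prompt construction (the helper and the schema-in-system-prompt convention are illustrative, not part of this adapter's API); the resulting messages can be passed to `tokenizer.apply_chat_template(...)` before `model.generate(...)`:

```python
# Sketch: build chat messages that ask for JSON conforming to a schema.
import json

def build_messages(schema, instruction):
    """Build a chat request asking for output matching a JSON schema."""
    system = (
        "Return only a JSON object that conforms to this schema:\n"
        + json.dumps(schema)
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": instruction},
    ]

messages = build_messages(
    {"type": "object", "properties": {"name": {"type": "string"}}},
    "Extract the person's name from: 'Alice joined in 2020.'",
)
```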

Sources & Terms (IMPORTANT)

Training data: u-10bei/structured_data_with_cot_dataset_512_v2

Dataset license: MIT. The dataset is used and redistributed under the terms of the MIT License. Users must retain the copyright notice required by MIT and must also comply with the base model's original terms of use.
