# qwen3-4b-structured-output-lora-v4

This repository provides a LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit, Unsloth).

This repository contains LoRA adapter weights only. The base model must be loaded separately.

## Training Objective

This adapter is trained to improve structured-output accuracy across JSON, YAML, XML, TOML, and CSV.

Loss is applied only to the final assistant output (assistant-only loss). Intermediate Chain-of-Thought reasoning steps before the `Output:` marker are masked and excluded from the loss calculation.
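The masking described above can be sketched as follows (a hypothetical illustration, not this repository's actual training code): every label up to and including the `Output:` marker is set to `-100`, the index that `torch.nn.CrossEntropyLoss` ignores, so only the final structured output contributes to the loss.

```python
IGNORE_INDEX = -100  # label value ignored by torch.nn.CrossEntropyLoss

def mask_before_marker(token_ids, marker_ids):
    """Mask every label up to and including the marker subsequence.

    token_ids / marker_ids are plain lists of token ids; in real training
    they would come from the tokenizer.
    """
    n, m = len(token_ids), len(marker_ids)
    for i in range(n - m + 1):
        if token_ids[i:i + m] == marker_ids:
            cut = i + m
            return [IGNORE_INDEX] * cut + token_ids[cut:]
    return list(token_ids)  # marker not found: leave all labels active

# Tokens 5, 6 (reasoning) and 7, 8 (the marker) are masked; 9, 10 remain.
labels = mask_before_marker([5, 6, 7, 8, 9, 10], marker_ids=[7, 8])
# → [-100, -100, -100, -100, 9, 10]
```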

## Training Configuration

| Parameter | Value |
|---|---|
| Base model | Qwen/Qwen3-4B-Instruct-2507 |
| Method | QLoRA (4-bit, Unsloth) |
| Max sequence length | 512 |
| Epochs | 2 |
| Learning rate | 1e-04 |
| LR scheduler | cosine |
| Warmup ratio | 0.1 |
| Gradient accumulation steps | 4 |
| Weight decay | 0.05 |
| LoRA rank (r) | 16 |
| LoRA alpha | 16 |
| LoRA dropout | 0.0 |
| LoRA target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
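For reference, the LoRA hyperparameters above map onto a `peft` `LoraConfig` roughly as follows (a sketch; the actual run used Unsloth's QLoRA wrapper, so details may differ):

```python
from peft import LoraConfig

# Mirrors the LoRA rows of the table above. The QLoRA-specific 4-bit settings
# (bitsandbytes quantization) are configured separately when loading the base model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```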

## Training Datasets

- u-10bei/structured_data_with_cot_dataset_512_v2

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "noirchan/qwen3-4b-structured-output-lora-v4"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)

# Inference example
messages = [{"role": "user", "content": "Convert the following to JSON: name=Alice, age=30"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)  # greedy decoding
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
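Because the adapter targets structured output, it can help to validate generations programmatically. A minimal check for the JSON case (a hypothetical helper, not part of this repository) that strips an optional `Output:` prefix before parsing:

```python
import json

def parse_json_output(text):
    """Strip an optional 'Output:' prefix and parse the remainder as JSON."""
    _, sep, tail = text.partition("Output:")
    candidate = (tail if sep else text).strip()
    return json.loads(candidate)  # raises json.JSONDecodeError on invalid output

print(parse_json_output('Output: {"name": "Alice", "age": 30}'))
# → {'name': 'Alice', 'age': 30}
```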

## Sources & Terms (IMPORTANT)

Training datasets used:

- u-10bei/structured_data_with_cot_dataset_512_v2

Dataset license: MIT. The datasets are used and distributed under the terms of the MIT License.

Compliance: Users must comply with the MIT License (including preservation of the copyright notice) and the base model's original terms of use (Apache 2.0).
