SFT + GDPO trained Qwen3-4B for Structured Output

This model was trained in two phases:

  1. SFT (Supervised Fine-Tuning) for structured output basics
  2. GDPO (Group reward-Decoupled Normalization Policy Optimization) for multi-reward RL refinement

This repository contains the fully merged weights in bfloat16 (16-bit) precision.

GDPO Training (Phase 2)

GDPO (arXiv 2601.05242, NVIDIA Research) decouples reward normalization: each reward is normalized independently across the generation group, and the normalized values are then summed. This preserves the relative differences between individual rewards that a single combined normalization would wash out.
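A minimal sketch of the normalize-then-sum aggregation described above (pure Python, illustrative only; function and variable names are not from the paper):

```python
import statistics

def normalize_then_sum(reward_matrix):
    """GDPO-style aggregation sketch: z-normalize each reward
    independently across the group of generations, then sum the
    normalized values per generation."""
    num_rewards = len(reward_matrix)   # rows: reward functions
    num_gens = len(reward_matrix[0])   # cols: generations in the group
    normalized = []
    for row in reward_matrix:
        mean = statistics.fmean(row)
        std = statistics.pstdev(row) or 1.0  # guard against zero variance
        normalized.append([(r - mean) / std for r in row])
    # Sum across reward functions for each generation
    return [sum(normalized[i][g] for i in range(num_rewards))
            for g in range(num_gens)]

# Two rewards on very different scales for a group of 4 generations:
scores = normalize_then_sum([
    [10.0, 20.0, 30.0, 40.0],   # e.g. a reward on a large scale
    [0.1, 0.4, 0.2, 0.3],       # e.g. a reward on a small scale
])
```

Because each reward is normalized before summation, the small-scale reward contributes as much to the final score as the large-scale one, which is the point of decoupling.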

Reward Functions

  • Format Compliance (weight 1.0): approach/output structure
  • Structured Output Validity (weight 3.0): JSON/XML/YAML/TOML/CSV parsing
  • Output Length (weight 0.3): appropriate length
  • No Repetition (weight 0.5): prevents degeneration
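As an illustration of the highest-weighted reward, a validity reward can simply try to parse the completion with the requested format's parser and return a binary score. This is a hypothetical sketch, not the training code; only JSON and XML from the standard library are shown:

```python
import json
import xml.etree.ElementTree as ET

def structured_validity_reward(completion: str, fmt: str) -> float:
    """Return 1.0 if the completion parses as the requested format,
    else 0.0. Hypothetical sketch of the validity reward (weight 3.0)."""
    try:
        if fmt == "json":
            json.loads(completion)
        elif fmt == "xml":
            ET.fromstring(completion)
        else:
            return 0.0  # YAML/TOML/CSV would need their own parsers
        return 1.0
    except (json.JSONDecodeError, ET.ParseError):
        return 0.0

ok = structured_validity_reward('{"name": "Qwen", "size": "4B"}', "json")
bad = structured_validity_reward('{"name": unquoted}', "json")
```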

Configuration

  • Method: GDPO (reward_aggregation=normalize_then_sum)
  • scale_rewards: none
  • LoRA: r=16, alpha=32
  • Learning rate: 5e-07
  • Beta: 0.01
  • Num generations: 8
  • Max steps: 300
  • Max completion length: 384
  • Dataset size: 2000 prompts (subsampled)
  • GPU: NVIDIA A100 80GB (bf16)
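For reference, the settings above can be collected into a plain config dict (values copied from the list; the LoRA scaling factor alpha/r is derived, and the comment on beta reflects its usual role as the KL penalty coefficient in GRPO-family trainers):

```python
gdpo_config = {
    "method": "GDPO",
    "reward_aggregation": "normalize_then_sum",
    "scale_rewards": "none",
    "lora_r": 16,
    "lora_alpha": 32,
    "learning_rate": 5e-7,
    "beta": 0.01,               # KL penalty coefficient
    "num_generations": 8,
    "max_steps": 300,
    "max_completion_length": 384,
    "dataset_size": 2000,       # subsampled prompts
}

# Effective LoRA scaling applied to adapter updates: alpha / r
lora_scaling = gdpo_config["lora_alpha"] / gdpo_config["lora_r"]
```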

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "kabuizuchi-trading/gdpo-qwen-structured-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto",
)
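Because the model is tuned for structured output, it is common to validate the generated text before using it downstream. A minimal check on a hypothetical sampled completion (the JSON string below is an invented example, not real model output):

```python
import json

# Hypothetical completion from model.generate + tokenizer.decode
completion = '{"name": "Qwen3-4B", "format": "json"}'

try:
    parsed = json.loads(completion)
except json.JSONDecodeError:
    parsed = None  # in real use: re-prompt or repair the output
```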
