# Qwen3-4B-Thinking-2507-GLM-4.7-Distilled
Qwen3-4B-Thinking-2507-GLM-4.7-Distilled is a fine-tuned model built upon the GRPO-optimized `Jackrong/DASD-4B-Thinking-2507-GRPO-v2` (itself based on `Qwen/Qwen3-4B-Thinking-2507`). It was developed with a Supervised Fine-Tuning (SFT) strategy distilled primarily from the GLM-4.7 model series (sampled at a default temperature of 1.0), with a central focus on multi-turn conversational alignment and structured Chain-of-Thought (CoT) execution.
**Core Improvement:** The primary objective of this fine-tuning was to transform the model's reasoning pattern for everyday and lightweight tasks. Instead of the typical linear, free-associative, and heavily self-correcting ("think-as-you-go") stream of consciousness, the model has learned a confident "Plan-then-Execute" paradigm: it systematically breaks a task into a logical outline and executes modular, report-like responses without unnecessary self-doubt or hesitation.
## Training Pipeline Overview

This model is the culmination of two sequential training stages targeting mathematical reasoning and conversational CoT tracking:

```
Qwen/Qwen3-4B-Thinking-2507
        │
        ▼  Stage 0: GRPO (RL on Math & Reasoning)
Jackrong/DASD-4B-Thinking-2507-GRPO-v2
        │
        ▼  Stage 1: SFT with GLM-4.7 Series Distilled Datasets (T=1.0)
Qwen3-4B-Thinking-2507-GLM-4.7-Distilled  (this model)
```
## Chain-of-Thought (CoT) Evolution: Base vs. Distilled

A significant shift in the model's reasoning style is observed after distillation from the GLM-4.7 series data. The model transitions from a spontaneous thinker into a structured planner:

| Feature | Base Model (Qwen3-4B-Thinking) | Distilled Model (GLM-4.7-Distilled) |
|---|---|---|
| Thinking Style | Linear, stream-of-consciousness | Modularized, report-like |
| Execution | Thinks on the fly, writes as it thinks | "Plan-then-Execute" framework |
| Structure | Unstructured, organic self-correction mid-thought | Highly structured, with headings and logical phases |
| Confidence | Frequent self-doubt ("Wait...", "Maybe...", "Should I...") | Confident, rarely hesitates |
| Output Tone | Conversational, exploring multiple paths | Objective, direct, and systematic |

**Key Takeaway:** Through the GLM-4.7 dataset distillation, the model learned the modular thinking paradigm. Instead of continuously questioning itself, it now breaks down tasks, creates a clear outline, and systematically executes each step, much like writing a formal report.
## Stage Details

### Stage 0 – GRPO Reinforcement Learning: DASD-4B-Thinking-2507-GRPO-v2

Starting from the base model `Qwen/Qwen3-4B-Thinking-2507`, Group Relative Policy Optimization (GRPO) was applied. This stage consisted of:
- Cold Start: fine-tuning on the `unsloth/OpenMathReasoning-mini` dataset.
- Reinforcement Learning: applying GRPO on the `open-r1/DAPO-Math-17k-Processed` dataset.
This stage significantly improved the model's:
- Correctness on math problem solving
- Step-by-step logical reasoning
- Reward signal alignment for verifiable tasks
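GRPO's central trick can be sketched with a toy computation (the rewards below are hypothetical, not from the actual training run): for each prompt, a group of responses is sampled and scored, and each response's advantage is its reward normalized against the group's own mean and standard deviation, so no separate value network is needed.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize each reward against its group's mean and standard deviation,
    the group-relative baseline used by GRPO."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Hypothetical verifier rewards for four sampled answers to one math prompt:
# correct answers score 1.0, incorrect ones 0.0.
rewards = [1.0, 0.0, 1.0, 0.0]
advantages = group_relative_advantages(rewards)
print(advantages)  # above-average answers get positive advantages
```

Responses that beat their group's average are reinforced and the rest are penalized, which matches the "reward signal alignment for verifiable tasks" goal above.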
### Stage 1 – SFT GLM-4.7 Distillation (T=1.0): Qwen3-4B-Thinking-2507-GLM-4.7-Distilled (this model)
Building on the reasoning foundation of DASD-4B-Thinking-2507-GRPO-v2, Stage 1 SFT was performed using a mixed dataset heavily utilizing GLM-4.7 synthetic data generated at a default temperature of 1.0, along with multi-turn alignments.
Higher-temperature data introduces greater lexical diversity, broader mode coverage, and more richly formatted chain-of-thought traces, enabling the model to generalize better across diverse conversational reasoning patterns and problem domains. It also helps the model handle multi-turn conversations effectively while preserving its internal `<think>...</think>` reasoning structure.
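Why temperature matters for diversity can be sketched with a plain temperature-scaled softmax (the logits below are illustrative, not taken from either model): lower temperatures concentrate probability mass on the top token, while T=1.0 keeps the distribution flatter, so sampled distillation data covers more phrasings.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to sampling probabilities at a given temperature.
    Higher T flattens the distribution, increasing lexical diversity."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for four candidate next tokens
logits = [4.0, 2.0, 1.0, 0.5]
sharp = softmax_with_temperature(logits, 0.5)    # low T: peaked, repetitive
default = softmax_with_temperature(logits, 1.0)  # T used for the distillation data
print(max(sharp) > max(default))  # True: lower T puts more mass on the top token
```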
## All Datasets Used

| Stage | Dataset | Purpose |
|---|---|---|
| GRPO (Cold Start) | `unsloth/OpenMathReasoning-mini` | Initial foundational mathematical reasoning |
| GRPO (RL) | `open-r1/DAPO-Math-17k-Processed` | Math & reasoning RL training via GRPO |
| SFT Distillation | `Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b` (Stage 2) | Diverse reasoning structures |
| SFT Distillation | `Jackrong/glm-4.7-multiturn-CoT` | Multi-turn CoT alignment |
| SFT Distillation | `Jackrong/glm-4.7-Superior-Reasoning-stage1` | Enhanced fundamental reasoning |
| SFT Distillation | `TeichAI/glm-4.7-2000x` | Generalization and lexical diversity |
| SFT Distillation | `Jackrong/MultiReason-ChatAlpaca` | Conversational multi-turn tracking |
## Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Jackrong/Qwen3-4B-Thinking-2507-GLM-4.7-Distilled"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Solve: find all real solutions to x^3 - 6x^2 + 11x - 6 = 0."}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=4096)
# Decode only the newly generated tokens, not the prompt
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
Tip: This model naturally generates `<think>...</think>` reasoning traces before the final answer. You can parse these to inspect the chain of thought.
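Splitting the trace from the answer takes only a small parser; a minimal sketch, assuming the `<think>` tags survive decoding (if your tokenizer treats them as special tokens, decode with `skip_special_tokens=False` to retain them):

```python
import re

def split_think_trace(response: str):
    """Separate the <think>...</think> reasoning trace from the final answer.
    Returns (trace or None, answer)."""
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if match is None:
        return None, response.strip()  # no trace found in the output
    trace = match.group(1).strip()
    answer = response[match.end():].strip()
    return trace, answer

# Example with a mock response string (not real model output)
trace, answer = split_think_trace("<think>Factor the cubic.</think>x = 1, 2, 3")
print(answer)  # "x = 1, 2, 3"
```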
## Model Details
| Attribute | Value |
|---|---|
| Base Model | Jackrong/DASD-4B-Thinking-2507-GRPO-v2 |
| Architecture | Qwen3 (4B Dense) |
| License | Apache 2.0 |
| Language(s) | English, Chinese |
| Training Framework | Unsloth + Hugging Face TRL |
| RL Algorithm | GRPO (Group Relative Policy Optimization) |
| Fine-tuning Method | SFT (GLM-4.7 Distillation at T=1.0) |
| Developed by | Jackrong |
## Limitations & Intended Use
- This model is intended for research and educational purposes related to reasoning and mathematical problem-solving.
- While mathematical and logical reasoning capabilities have been enhanced, the model may still produce incorrect answers or hallucinations β always verify outputs on critical tasks.
- The model inherits the capabilities and limitations of the underlying `Qwen3-4B-Thinking-2507` architecture.
- Not intended for deployment in high-stakes applications without additional safety evaluation.
## Related Models

| Model | Description |
|---|---|
| `Qwen/Qwen3-4B-Thinking-2507` | Base model |
| `Jackrong/DASD-4B-Thinking-2507-GRPO-v2` | After GRPO RL training |
| `Jackrong/Qwen3-4B-Thinking-2507-GLM-4.7-Distilled` | This model (GLM-4.7 distilled) |
## Acknowledgements

- Zhipu AI for the GLM-4.7 model series
- Alibaba Cloud Apsara Lab for reasoning datasets
- Open-R1 for the DAPO Math dataset
- Unsloth for efficient fine-tuning infrastructure
- Qwen Team for the excellent base model