Art-Qwen3-4B-Thinking-2507

This is the CoT-efficient version of the Qwen3-4B-Thinking-2507 model, presented in the paper The Art of Efficient Reasoning: Data, Reward, and Optimization.

The model was trained on the DeepScaleR-Easy dataset to incentivize short yet accurate thinking trajectories.

Model Description

Large Language Models (LLMs) consistently benefit from scaled Chain-of-Thought (CoT) reasoning, but longer trajectories incur heavy computational overhead. This model targets efficient reasoning through a two-stage training paradigm: length adaptation followed by reasoning refinement. Through reward shaping with Reinforcement Learning (RL), the model is optimized to maintain high accuracy across a wide spectrum of token budgets while avoiding the "short-is-correct" trap, in which length-based rewards push the model toward short responses at the expense of correctness.
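The paper's exact reward is not reproduced in this card; the sketch below is only an illustration of the general idea behind length-aware reward shaping, with a hypothetical `shaped_reward` function and an assumed linear length bonus. Correctness gates the reward, so a short but wrong trajectory earns nothing.

```python
def shaped_reward(correct: bool, length: int, budget: int) -> float:
    """Illustrative length-shaped reward (not the paper's formula).

    Correctness gates the reward, so brevity is never rewarded on its own
    (avoiding the "short-is-correct" trap). Among correct answers, staying
    under the token budget earns an extra bonus.
    """
    if not correct:
        return 0.0
    # Linear bonus that decays to zero as the trajectory reaches the budget.
    length_bonus = max(0.0, 1.0 - length / budget)
    return 1.0 + length_bonus


# A wrong answer gets no reward regardless of length; a correct answer
# gets more reward the further it stays under the budget.
print(shaped_reward(False, 50, 1000))   # 0.0
print(shaped_reward(True, 2000, 1000))  # 1.0 (correct, but over budget)
```

Any real RL setup would compute `correct` with a verifier on the final answer and `length` from the generated token count; the shaping above merely shows why correct-and-short can dominate correct-and-long without wrong-and-short ever winning.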

For more details, please visit the Project Page.
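A minimal inference sketch with Hugging Face transformers, assuming the standard Qwen3 chat template and that the model emits its reasoning before a `</think>` marker; the helper names (`split_thinking`, `generate`) are illustrative, not part of the release.

```python
MODEL_ID = "taki555/Qwen3-4B-Thinking-2507-Art"


def split_thinking(text: str) -> tuple[str, str]:
    """Separate the thinking trajectory from the final answer.

    Assumes the model closes its reasoning with a `</think>` marker,
    as Qwen3 thinking models do.
    """
    marker = "</think>"
    if marker in text:
        thinking, answer = text.split(marker, 1)
        return thinking.replace("<think>", "").strip(), answer.strip()
    return "", text.strip()


def generate(prompt: str, max_new_tokens: int = 2048) -> tuple[str, str]:
    """Load the model lazily and return (thinking, answer) for one prompt."""
    # Imported here so the helper above stays usable without the model weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    messages = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    new_tokens = out[0][inputs.input_ids.shape[1]:]
    return split_thinking(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

With a smaller `max_new_tokens`, the length-adapted training described above is meant to keep the answer accurate even as the thinking budget shrinks.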

Citation

@inproceedings{wu2026art,
  title={The Art of Efficient Reasoning: Data, Reward, and Optimization},
  author={Taiqiang Wu and Zenan Xu and Bo Zhou and Ngai Wong},
  year={2026},
  url={https://arxiv.org/pdf/2602.20945}
}
Model size: 4B parameters · Tensor type: BF16 (Safetensors)