Art-Qwen3-30B-A3B-Thinking

This repository contains a Chain-of-Thought (CoT) efficient version of the Qwen3-30B-A3B-Thinking-2507 model, presented in the paper The Art of Efficient Reasoning: Data, Reward, and Optimization.

The model was trained on the taki555/DeepScaleR-Easy dataset with Reinforcement Learning (RL) strategies that incentivize accurate yet concise reasoning trajectories, reducing the computational overhead often associated with long CoT traces.
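The paper's actual reward formulation is not reproduced here, but the core idea of rewarding accuracy while discouraging needlessly long traces can be sketched as a length-penalized accuracy reward. This is a hypothetical illustration, not the paper's method; the names `efficiency_reward`, `alpha`, and `max_tokens` are illustrative assumptions:

```python
def efficiency_reward(is_correct: bool, num_tokens: int,
                      max_tokens: int = 4096, alpha: float = 0.5) -> float:
    """Hypothetical length-penalized reward: a correct answer earns full
    credit, discounted linearly by the length of its reasoning trace.
    Incorrect answers earn nothing, so brevity never outranks accuracy."""
    if not is_correct:
        return 0.0
    length_frac = min(num_tokens / max_tokens, 1.0)
    return 1.0 - alpha * length_frac

# A short correct trace outranks a long correct one; any correct
# trace outranks an incorrect one.
print(efficiency_reward(True, 512))    # -> 0.9375 (short, correct)
print(efficiency_reward(True, 4096))   # -> 0.5    (long, correct)
print(efficiency_reward(False, 512))   # -> 0.0    (incorrect)
```

Under a reward of this shape, an RL optimizer is pushed toward the shortest trajectory that still reaches the correct answer, which is the behavior the model card describes.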

Resources

Citation

```bibtex
@inproceedings{wu2026art,
  title={The Art of Efficient Reasoning: Data, Reward, and Optimization},
  author={Taiqiang Wu and Zenan Xu and Bo Zhou and Ngai Wong},
  year={2026},
  url={https://arxiv.org/pdf/2602.20945}
}
```
Model size: 31B params · Tensor type: BF16 · Format: Safetensors
