# Art-Qwen3-1.7B

Art-Qwen3-1.7B is a Chain-of-Thought (CoT)-efficient version of Qwen3-1.7B, developed as part of the research presented in the paper "The Art of Efficient Reasoning: Data, Reward, and Optimization".

## Model Description

Art-Qwen3-1.7B is optimized for efficient reasoning: it aims to produce short yet accurate thinking trajectories. It was trained with Reinforcement Learning (RL) using specialized reward shaping on the DeepScaleR-Easy dataset. Training follows a two-stage paradigm of length adaptation followed by reasoning refinement, which preserves accuracy while reducing computational overhead at inference time.
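The card does not include a usage snippet; below is a minimal inference sketch with Hugging Face `transformers`. It assumes the repo id `taki555/Qwen3-1.7B-Art` and the standard Qwen3 chat-template interface with a `<think>...</think>` reasoning block; the generation settings are illustrative, not the authors' exact configuration.

```python
# Minimal inference sketch for Art-Qwen3-1.7B.
# Assumptions: repo id "taki555/Qwen3-1.7B-Art" and the standard Qwen3
# <think>...</think> output format; generation settings are illustrative.

def extract_final_answer(text: str) -> str:
    """Drop the <think>...</think> trajectory and keep the final answer."""
    marker = "</think>"
    return text.split(marker, 1)[1].strip() if marker in text else text.strip()

def run_demo(question: str) -> str:
    # Heavy import kept local so the helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "taki555/Qwen3-1.7B-Art"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    messages = [{"role": "user", "content": question}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    out = model.generate(inputs, max_new_tokens=2048)
    completion = tokenizer.decode(
        out[0][inputs.shape[-1]:], skip_special_tokens=True
    )
    return extract_final_answer(completion)
```

Because the model is trained for short trajectories, the `<think>` span should be brief; `extract_final_answer` simply discards it so downstream code sees only the answer.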

## Citation

```bibtex
@inproceedings{wu2026art,
  title={The Art of Efficient Reasoning: Data, Reward, and Optimization},
  author={Taiqiang Wu and Zenan Xu and Bo Zhou and Ngai Wong},
  year={2026},
  url={https://arxiv.org/pdf/2602.20945}
}
```