Papers
arxiv:2603.00578

Draft-Thinking: Learning Efficient Reasoning in Long Chain-of-Thought LLMs

Published on Feb 28
Authors:

Abstract

Draft-Thinking reduces reasoning costs in large reasoning models by teaching them to generate concise draft-style reasoning structures through progressive curriculum learning and adaptive prompting.

AI-generated summary

Long chain-of-thought (CoT) has become a dominant paradigm for enhancing the reasoning capability of large reasoning models (LRMs); however, the performance gains often come with a substantial increase in reasoning budget. Recent studies show that existing CoT paradigms tend to induce systematic overthinking, unnecessarily coupling reasoning capability with reasoning cost. Most prior approaches reduce token usage through post hoc techniques such as token compression, truncation, or length penalties, without explicitly addressing the core mechanisms of reasoning. We propose Draft-Thinking, which guides models to first learn a concise draft-style reasoning structure that retains only the critical reasoning steps. Through progressive curriculum learning, the model stably internalizes this efficient reasoning pattern as its capability scales. Moreover, Draft-Thinking introduces adaptive prompting, which elevates reasoning depth to a flexible, model-selectable behavior. Extensive experiments demonstrate that Draft-Thinking substantially reduces reasoning budget while largely preserving reasoning performance; for example, on MATH500, it achieves an 82.6% reduction in reasoning budget at the cost of only a 2.6% performance drop.
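The adaptive-prompting idea above can be pictured as a prompt template that defaults to concise draft-style reasoning and leaves depth escalation to the model. This is a minimal illustrative sketch, not the paper's actual implementation: the instruction wording, function name, and `allow_deep` flag are all hypothetical.

```python
# Hypothetical sketch of adaptive prompting for draft-style reasoning.
# Not from the paper: instruction text and parameter names are illustrative.

DRAFT_INSTRUCTION = (
    "Reason in a concise draft: keep only the critical steps, "
    "one short line each, then state the final answer."
)

DEEP_INSTRUCTION = (
    "If the draft is insufficient to decide, you may expand any step "
    "into a full derivation before answering."
)

def build_prompt(question: str, allow_deep: bool = True) -> str:
    """Compose a draft-thinking prompt; deeper reasoning is optional,
    so depth becomes a model-selectable behavior rather than a fixed cost."""
    parts = [DRAFT_INSTRUCTION]
    if allow_deep:
        parts.append(DEEP_INSTRUCTION)
    parts.append(f"Question: {question}")
    return "\n".join(parts)

# With allow_deep=False the prompt enforces draft-only reasoning,
# mirroring the low-budget end of the trade-off the abstract describes.
print(build_prompt("What is 17 * 24?", allow_deep=False))
```

In this sketch the reasoning-budget reduction comes purely from the instruction: the draft prompt caps verbosity, while the optional escalation clause preserves accuracy on questions that genuinely need longer derivations.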


