QThink-Qwen3-1.7B-AIME2025

QThink: Parallel Latent Reasoning via Per-Step Distillation of Multiple Rollouts

Trained on the English subset of DAPO-Math-17K (14.1K problems) for competition-level math (AIME 2025).

Caveat: This is a preliminary model trained with short-context data (768-token budget). AIME problems require long reasoning chains (16K+ tokens). A full-length version (R18) is in progress. Results below reflect the short-context constraint, not the method's ceiling.

Results (AIME 2025, 30 problems, 10 runs, temperature=0.6)

| Model | Mean ± SE |
|---|---|
| CODI (paper method) | 8.7% ± 1.3% |
| QThink (ours) | 8.3% ± 1.1% |
| SFT | 7.3% ± 1.0% |
| Base Qwen3-1.7B | 6.3% ± 1.3% |

All methods trained with matched 768-token budget for fair comparison. Differences are within standard error on this small test set.
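The mean ± SE figures are the mean accuracy and its standard error over the 10 evaluation runs. A minimal sketch of that computation (the per-run accuracies below are illustrative placeholders, not the actual run results):

```python
import math

def mean_and_se(accuracies):
    """Mean and standard error of the mean over independent evaluation runs."""
    n = len(accuracies)
    mean = sum(accuracies) / n
    # Sample variance (n-1 denominator), then SE of the mean = sqrt(var / n).
    var = sum((a - mean) ** 2 for a in accuracies) / (n - 1)
    return mean, math.sqrt(var / n)

# Hypothetical per-run accuracies: fraction of the 30 AIME problems solved,
# over 10 runs at temperature 0.6.
runs = [2/30, 3/30, 2/30, 3/30, 3/30, 2/30, 3/30, 2/30, 3/30, 2/30]
mean, se = mean_and_se(runs)
print(f"{100*mean:.1f}% ± {100*se:.1f}%")  # -> 8.3% ± 0.6%
```

With only 30 problems per run, a single extra solved problem moves accuracy by 3.3 points, which is why the between-method gaps above fall inside the reported standard errors.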

For reference, the official Qwen3-1.7B in thinking mode achieves 65.6% on AIME 2025 using 32K+ generation tokens (Qwen3 Technical Report, arXiv 2505.09388).

Cross-Benchmark Results (QThink best on 3 of 4)

| Benchmark | QThink | SFT | Base | CODI |
|---|---|---|---|---|
| GSM8k | 83.2% | 80.7% | 77.3% | 78.2% |
| MATH-500 | 43.6% | 38.2% | 33.6% | 31.2% |
| Tooluse | 48.5% | 45.6% | 47.1% | 42.6% |
| AIME 2025 | 8.3% | 7.3% | 6.3% | 8.7% |

Training Config

  • Base model: Qwen/Qwen3-1.7B with LoRA (rank=32, alpha=16)
  • Mode: uniform multi-rollout per-step distillation (gamma=2.0, K=6 latent steps)
  • Training data: DAPO-Math-17K English (14.1K problems, 16 rollouts each)
  • Student budget: max_prompt=512, max_answer=256 (768 total)
  • Teacher states: precomputed from full rollouts (up to 4096 tokens)
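The per-step distillation objective is not spelled out in this card. As one plausible reading of "uniform multi-rollout per-step distillation", the sketch below averages precomputed teacher states uniformly across rollouts at each latent step and weights the resulting per-step regression term by gamma. The function names and 1-D toy "states" are illustrative assumptions, not the actual QThink implementation:

```python
# Hypothetical sketch, assuming: the target for each of the K latent steps is
# the uniform average of teacher hidden states across rollouts at that step,
# and the distillation term is added to the answer loss with weight gamma.
K = 6          # latent reasoning steps (from the config above)
GAMMA = 2.0    # distillation loss weight (from the config above)

def distill_targets(rollout_states):
    """rollout_states[r][k]: teacher state of rollout r at latent step k.
    Returns the uniform per-step average across all rollouts."""
    n = len(rollout_states)
    return [sum(r[k] for r in rollout_states) / n for k in range(K)]

def total_loss(student_states, targets, answer_loss):
    """Answer loss plus gamma times the mean squared per-step error."""
    mse = sum((s - t) ** 2 for s, t in zip(student_states, targets)) / K
    return answer_loss + GAMMA * mse

# Toy example: 3 identical rollouts, so the student matching the teacher
# exactly leaves only the answer loss.
rollouts = [[1.0] * K, [1.0] * K, [1.0] * K]
targets = distill_targets(rollouts)
print(total_loss([1.0] * K, targets, answer_loss=0.5))  # -> 0.5
```

In the real pipeline the states would be high-dimensional hidden vectors from the 16 rollouts per problem, with teacher states precomputed from the full 4096-token rollouts.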
Weights: Safetensors, ~2B params, BF16.
