Qwen3-30B-A3B-Thinking-2507-speculator.eagle3

Model Overview

  • Verifier: Qwen3-30B-A3B
  • Speculative Decoding Algorithm: EAGLE-3
  • Model Architecture: Eagle3Speculator
  • Release Date: 3/12/2026
  • Version: 1.0
  • Model Developers: Red Hat

This model is a copy of RedHatAI/Qwen3-30B-A3B-speculator.eagle3. It can be used with Qwen/Qwen3-30B-A3B-Thinking-2507 as well.

This model is based on the EAGLE-3 speculative decoding algorithm. It was trained using the speculators library on a combination of the Magpie-Align/Magpie-Pro-300K-Filtered and the HuggingFaceH4/ultrachat_200k datasets. The model was trained with thinking enabled. This model should be used with the Qwen3-30B-A3B chat template, specifically through the /chat/completions endpoint.

Use with vLLM

vllm serve Qwen/Qwen3-30B-A3B-Thinking-2507 \
  -tp 1 \
  --speculative-config '{
    "model": "RedHatAI/Qwen3-30B-A3B-Thinking-2507-speculator.eagle3",
    "num_speculative_tokens": 5,
    "method": "eagle3"
  }'
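Once the server is up, requests should go through the chat completions endpoint so that the Qwen3-30B-A3B chat template is applied, as noted above. A minimal sketch (the prompt, sampling parameters, and `request.json` filename are illustrative, not from the original card):

```shell
# Build a request body for vLLM's OpenAI-compatible /v1/chat/completions endpoint.
cat > request.json <<'EOF'
{
  "model": "Qwen/Qwen3-30B-A3B-Thinking-2507",
  "messages": [
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."}
  ],
  "temperature": 0.6,
  "max_tokens": 1024
}
EOF

# Send it to the running vLLM server; falls through with a note if no server is listening.
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d @request.json || echo "(no server running at localhost:8000)"
```

Speculative decoding is transparent to the client: the response format is identical to non-speculative serving.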

Evaluations

Model / run: Qwen3-30B-A3B-Thinking-2507-speculator.eagle3 (CKPT 5)
vLLM: 0.15.0
Training data: Magpie + UltraChat; responses from the Qwen/Qwen3-235B-A22B model (with reasoning enabled).

Acceptance lengths (draft length)

Dataset          k=1    k=2    k=3    k=4    k=5
HumanEval        1.81   2.44   2.90   3.21   3.44
math_reasoning   1.84   2.50   3.02   3.41   3.70
qa               1.69   2.15   2.44   2.61   2.72
question         1.76   2.32   2.71   2.93   3.09
rag              1.74   2.25   2.60   2.82   2.97
summarization    1.66   2.05   2.30   2.43   2.51
translation      1.72   2.21   2.53   2.74   2.87
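Acceptance length is the mean number of tokens emitted per verifier forward pass, so it bounds the achievable speedup. A rough estimate can be derived by charging each speculative token an assumed relative draft cost c (the value 0.05 below is a hypothetical illustration, not a measured number); shown here for the math_reasoning row:

```shell
# Estimated speedup ≈ accepted tokens per step / relative cost per step (1 verifier pass + k draft passes).
awk 'BEGIN {
  c = 0.05;                                      # assumed draft cost per speculative token
  split("1.84 2.50 3.02 3.41 3.70", tau, " ");   # math_reasoning acceptance lengths, k=1..5
  for (k = 1; k <= 5; k++)
    printf "k=%d  accepted=%.2f  est. speedup=%.2fx\n", k, tau[k], tau[k] / (1 + k * c);
}'
```

Under this (assumed) cost model, larger k keeps helping on math_reasoning because acceptance length grows faster than the draft overhead; on summarization, where acceptance lengths plateau, the trade-off tips earlier.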
Details

Configuration

  • Model: Qwen3-30B-A3B-Thinking-2507
  • Data: Magpie + UltraChat — responses from Qwen3-30B-A3B model (reasoning)
  • temperature: 0.0
  • vllm: 0.15.0
  • backend: vLLM chat_completions
  • rate-type: throughput
  • max-seconds per run: 300
  • hardware: 8× GPU (tensor parallel 8)
  • Benchmark data: RedHatAI/speculator_benchmarks
  • vLLM serve: --no-enable-prefix-caching, --max-num-seqs 64, --enforce-eager
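Assembling the flags listed above, the benchmark server launch plausibly looked like the following (a sketch reconstructed from the configuration; flag order and the draft length of 5 are assumptions carried over from the serving example earlier in this card):

```shell
vllm serve Qwen/Qwen3-30B-A3B-Thinking-2507 \
  -tp 8 \
  --no-enable-prefix-caching \
  --max-num-seqs 64 \
  --enforce-eager \
  --speculative-config '{
    "model": "RedHatAI/Qwen3-30B-A3B-Thinking-2507-speculator.eagle3",
    "num_speculative_tokens": 5,
    "method": "eagle3"
  }'
```

Note that --no-enable-prefix-caching and --enforce-eager are benchmarking choices that isolate speculative-decoding gains; for production serving they would typically be left at their defaults.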

Command

GUIDELLM__PREFERRED_ROUTE="chat_completions" \
GUIDELLM__MAX_CONCURRENCY=128 \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/speculator_benchmarks" \
  --data-args '{"data_files": "HumanEval.jsonl"}' \
  --rate-type throughput \
  --max-seconds 300

The GuideLLM interface has since changed; for compatibility with the latest version (v0.6.0), use the following command instead:

GUIDELLM__PREFERRED_ROUTE="chat_completions" \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/speculator_benchmarks" \
  --data-args '{"data_files": "HumanEval.jsonl"}' \
  --profile sweep \
  --max-seconds 1800 \
  --output-path "my_output.json" \
  --backend-args '{"extras": {"body": {"temperature":0.6, "top_p":0.95, "top_k":20}}}'