Qwen3-30B-A3B-Instruct-2507-speculator.eagle3

Model Overview

  • Verifier: Qwen3-30B-A3B-Instruct-2507
  • Speculative Decoding Algorithm: EAGLE-3
  • Model Architecture: Eagle3Speculator
  • Release Date: 12/12/2025
  • Version: 1.0
  • Model Developers: Red Hat

This is a speculator model designed for use with Qwen3-30B-A3B-Instruct-2507, based on the EAGLE-3 speculative decoding algorithm. It was trained with the speculators library on a combination of the Magpie-Align/Magpie-Pro-300K-Filtered and HuggingFaceH4/ultrachat_200k datasets, with thinking disabled. The model should be used with the Qwen3-30B-A3B-Instruct-2507 chat template, specifically through the /chat/completions endpoint.

Use with vLLM

vllm serve Qwen3-30B-A3B-Instruct-2507 \
  -tp 1 \
  --speculative-config '{
    "model": "RedHatAI/Qwen3-30B-A3B-Instruct-2507-speculator.eagle3",
    "num_speculative_tokens": 3,
    "method": "eagle3"
  }'
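Once the server is up, requests go through the OpenAI-compatible /chat/completions endpoint so that the Qwen3 chat template is applied server-side. A minimal client-side sketch, assuming vLLM's default base URL (http://localhost:8000) and the served model name from the command above:

```python
import json
import urllib.request


def build_chat_request(prompt: str,
                       model: str = "Qwen3-30B-A3B-Instruct-2507") -> dict:
    """Build a /chat/completions payload; the chat template is applied
    by the server, so we only send raw messages."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def send(payload: dict, base_url: str = "http://localhost:8000") -> dict:
    """POST the payload to the running vLLM server (assumed default port)."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_chat_request("Write a haiku about speculative decoding.")
# send(payload) requires the vLLM server from the command above to be running.
```

Speculative decoding is transparent to the client: responses are identical in format to ordinary chat completions, only latency improves.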

Evaluations

Use cases

Use Case             Dataset          Number of Samples
Coding               HumanEval        168
Math Reasoning       gsm8k            80
Text Summarization   CNN/Daily Mail   80

Acceptance lengths

Use Case             k=1    k=2    k=3    k=4    k=5
Coding               1.81   2.47   2.85   3.13   3.50
Math Reasoning       1.87   2.53   3.09   3.57   3.79
Text Summarization   1.58   1.90   2.08   2.16   2.24
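The acceptance length at a given k is the average number of tokens the verifier commits per forward pass, so it upper-bounds the speedup over plain autoregressive decoding. A rough estimate that also charges for the drafter can be sketched as below; the drafter-to-verifier cost ratio c = 0.05 is an assumed illustrative value, not a measurement:

```python
# Acceptance lengths from the table above, keyed by use case and by k
# (the number of speculative tokens).
acceptance_length = {
    "coding":        {1: 1.81, 2: 2.47, 3: 2.85, 4: 3.13, 5: 3.50},
    "math":          {1: 1.87, 2: 2.53, 3: 3.09, 4: 3.57, 5: 3.79},
    "summarization": {1: 1.58, 2: 1.90, 3: 2.08, 4: 2.16, 5: 2.24},
}


def estimated_speedup(length: float, k: int, c: float = 0.05) -> float:
    """Rough cost model: one verifier pass plus k drafter passes (each
    costing c verifier-passes) yields `length` tokens on average, versus
    1 token per verifier pass for plain decoding. c is an assumption."""
    return length / (1 + k * c)


# With num_speculative_tokens=3, as in the serving config above:
for case, lengths in acceptance_length.items():
    print(f"{case}: ~{estimated_speedup(lengths[3], 3):.2f}x")
```

Under this model the math-reasoning workload benefits most, while summarization, with its shorter acceptance lengths, gains the least; real speedups depend on hardware and batch size.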
Model size: 0.5B params (Safetensors; tensor types I64, BF16, BOOL)
