# Qwen3-30B-A3B-speculator.eagle3

## Model Overview
- Verifier: Qwen3-30B-A3B
- Speculative Decoding Algorithm: EAGLE-3
- Model Architecture: Eagle3Speculator
- Release Date: 3/12/2026
- Version: 1.0
- Model Developers: RedHat
This is a speculator model designed for use with Qwen3-30B-A3B with reasoning enabled. For non-reasoning use cases, please use RedHatAI/Qwen3-30B-A3B-Instruct-2507-speculator.eagle3.
This model is based on the EAGLE-3 speculative decoding algorithm. It was trained with the speculators library on a combination of the Magpie-Align/Magpie-Pro-300K-Filtered and HuggingFaceH4/ultrachat_200k datasets, with thinking enabled.
This model should be used with the Qwen3-30B-A3B chat template, specifically through the /chat/completions endpoint.
## Use with vLLM
```shell
vllm serve Qwen3-30B-A3B \
    -tp 1 \
    --speculative-config '{
        "model": "RedHatAI/Qwen3-30B-A3B-speculator.eagle3",
        "num_speculative_tokens": 5,
        "method": "eagle3"
    }'
```
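Once the server is up, requests should go through the OpenAI-compatible `/chat/completions` endpoint so that the Qwen3-30B-A3B chat template is applied server-side. A minimal client sketch using only the standard library (the localhost URL and prompt are placeholders, and the default vLLM port 8000 is assumed):

```python
import json
import urllib.request

# OpenAI-compatible chat completions request body; vLLM applies the
# verifier model's chat template server-side for this endpoint.
payload = {
    "model": "Qwen3-30B-A3B",
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "temperature": 0.0,
    "max_tokens": 512,
}

def send(url: str = "http://localhost:8000/v1/chat/completions") -> dict:
    """POST the payload to a running vLLM server and return the parsed JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Speculative decoding happens entirely on the server; the client request is identical to one sent to a non-speculative deployment.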
## Evaluations
- Model / run: Qwen3-30B-A3B-speculator.eagle3
- vLLM: 0.15.0
- Training data: Magpie + UltraChat; responses from the Qwen/Qwen3-235B-A22B model (with reasoning enabled).
Mean acceptance length by number of speculative tokens k (draft length)
| Dataset | k=1 | k=2 | k=3 | k=4 | k=5 |
|---|---|---|---|---|---|
| HumanEval | 1.81 | 2.44 | 2.90 | 3.21 | 3.44 |
| math_reasoning | 1.84 | 2.50 | 3.02 | 3.41 | 3.70 |
| qa | 1.69 | 2.15 | 2.44 | 2.61 | 2.72 |
| question | 1.76 | 2.32 | 2.71 | 2.93 | 3.09 |
| rag | 1.74 | 2.25 | 2.60 | 2.82 | 2.97 |
| summarization | 1.66 | 2.05 | 2.30 | 2.43 | 2.51 |
| translation | 1.72 | 2.21 | 2.53 | 2.74 | 2.87 |
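As a rough summary of the table, the mean acceptance length at k=5 across the seven datasets is about 3.0, i.e. roughly three verifier tokens accepted per forward pass on average, with coding and math reasoning benefiting most. A quick check (numbers copied from the table above):

```python
# Acceptance lengths at k=5, copied from the evaluation table above.
accept_at_k5 = {
    "HumanEval": 3.44,
    "math_reasoning": 3.70,
    "qa": 2.72,
    "question": 3.09,
    "rag": 2.97,
    "summarization": 2.51,
    "translation": 2.87,
}

mean_accept = sum(accept_at_k5.values()) / len(accept_at_k5)
print(round(mean_accept, 2))  # → 3.04
```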
## Details

### Configuration
- Model: Qwen3-30B-A3B
- Data: Magpie + UltraChat — responses from Qwen3-30B-A3B model (reasoning)
- temperature: 0.0
- vllm: 0.15.0
- backend: vLLM chat_completions
- rate-type: throughput
- max-seconds per run: 300
- hardware: 8× GPU (tensor parallel 8)
- Benchmark data: RedHatAI/speculator_benchmarks
- vLLM serve: --no-enable-prefix-caching, --max-num-seqs 64, --enforce-eager
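Put together, the benchmark-time server invocation implied by the configuration above would look roughly like this (a sketch: tensor parallel 8 and the flags come from the list above, while the flag ordering is an assumption):

```shell
# Reconstructed from the configuration list; tensor parallel 8 and the
# serve flags are stated above, everything else mirrors the example command.
vllm serve Qwen3-30B-A3B \
    -tp 8 \
    --no-enable-prefix-caching \
    --max-num-seqs 64 \
    --enforce-eager \
    --speculative-config '{
        "model": "RedHatAI/Qwen3-30B-A3B-speculator.eagle3",
        "num_speculative_tokens": 5,
        "method": "eagle3"
    }'
```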
### Command
```shell
GUIDELLM__PREFERRED_ROUTE="chat_completions" \
GUIDELLM__MAX_CONCURRENCY=128 \
guidellm benchmark \
    --target "http://localhost:8000/v1" \
    --data "RedHatAI/speculator_benchmarks" \
    --data-args '{"data_files": "HumanEval.jsonl"}' \
    --rate-type throughput \
    --max-seconds 300
```