This is a speculator model designed for use with Qwen3-30B-A3B-Instruct-2507, based on the EAGLE-3 speculative decoding algorithm.
It was trained with the speculators library on a combination of the Magpie-Align/Magpie-Pro-300K-Filtered and HuggingFaceH4/ultrachat_200k datasets.
The model was trained with thinking disabled.
This model should be used with the Qwen3-30B-A3B-Instruct-2507 chat template, specifically through the `/chat/completions` endpoint.
```bash
vllm serve Qwen/Qwen3-30B-A3B-Instruct-2507 \
    --tensor-parallel-size 1 \
    --speculative-config '{
        "model": "RedHatAI/Qwen3-30B-A3B-Instruct-2507-speculator.eagle3",
        "num_speculative_tokens": 3,
        "method": "eagle3"
    }'
```
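Once the server is up, requests go through vLLM's OpenAI-compatible chat endpoint; speculative decoding is transparent to the client. A minimal sketch of building and sending a request, assuming the server listens on the default `localhost:8000` (the base URL and `max_tokens` value are illustrative):

```python
import json
import urllib.request

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build a chat-completions payload; vLLM applies the model's
    chat template server-side from the messages list."""
    return {
        "model": "Qwen/Qwen3-30B-A3B-Instruct-2507",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def send_chat_request(payload: dict, base_url: str = "http://localhost:8000") -> dict:
    # The response format is identical to a plain chat completion;
    # the speculator only changes how tokens are produced internally.
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```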
The speculator was evaluated on the following datasets:

| Use Case | Dataset | Number of Samples |
|---|---|---|
| Coding | HumanEval | 168 |
| Math Reasoning | gsm8k | 80 |
| Text Summarization | CNN/Daily Mail | 80 |
Measured mean acceptance lengths (average tokens produced per target-model forward pass) for `num_speculative_tokens` = k:

| Use Case | k=1 | k=2 | k=3 | k=4 | k=5 |
|---|---|---|---|---|---|
| Coding | 1.81 | 2.47 | 2.85 | 3.13 | 3.50 |
| Math Reasoning | 1.87 | 2.53 | 3.09 | 3.57 | 3.79 |
| Text Summarization | 1.58 | 1.90 | 2.08 | 2.16 | 2.24 |
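A rough way to read the table: the mean acceptance length is the average number of tokens emitted per target-model forward pass, so it upper-bounds the decode speedup; the draft model's own cost eats into that bound. A back-of-the-envelope sketch (the `draft_overhead` fraction is an illustrative assumption, not a measured value):

```python
def speedup_upper_bound(mean_acceptance_length: float) -> float:
    """With L tokens accepted per target forward pass, the target model
    runs ~1/L as often as in plain autoregressive decoding."""
    return mean_acceptance_length

def estimated_speedup(mean_acceptance_length: float, draft_overhead: float = 0.15) -> float:
    """Discount the bound by the relative cost of drafting + verification,
    expressed as an assumed fraction of the per-step target cost."""
    return mean_acceptance_length / (1.0 + draft_overhead)
```

For example, the coding workload at k=5 (mean acceptance length 3.50) gives an upper bound of 3.5x, and a lower effective speedup once draft overhead is accounted for.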