This is a speculator model designed for use with openai/gpt-oss-20b, based on the EAGLE-3 speculative decoding algorithm.
It was trained using the speculators library on hidden states distilled from openai/gpt-oss-120b, using the inference-optimization/gpt-oss-120b dataset (a combination of Magpie-Align/Magpie-Llama-3.1-Pro-300K-Filtered and the train_sft split of HuggingFaceH4/ultrachat_200k).
This model should be used with the openai/gpt-oss-20b chat template, specifically through the /chat/completions endpoint.
```shell
vllm serve openai/gpt-oss-20b \
  -tp 1 \
  --speculative-config '{
    "model": "RedHatAI/gpt-oss-20b-speculator.eagle3",
    "num_speculative_tokens": 3,
    "method": "eagle3"
  }'
```
Evaluations used the following datasets from RedHatAI/speculator_benchmarks:

| Use Case | Dataset | Number of Samples |
|---|---|---|
| Coding | HumanEval | 168 |
| Math Reasoning | math_reasoning | 80 |
| QA | qa | 80 |
| Question Answering | question | 80 |
| RAG | rag | 80 |
| Summarization | summarization | 80 |
| Translation | translation | 80 |
| Use Case | k=1 | k=2 | k=3 | k=4 | k=5 |
|---|---|---|---|---|---|
| Coding | 1.65 | 2.06 | 2.29 | 2.42 | 2.52 |
| Math Reasoning | 1.69 | 2.15 | 2.43 | 2.64 | 2.75 |
| QA | 1.50 | 1.74 | 1.88 | 1.91 | 1.93 |
| Question Answering | 1.58 | 1.90 | 2.07 | 2.19 | 2.25 |
| RAG | 1.53 | 1.82 | 1.95 | 2.04 | 2.08 |
| Summarization | 1.56 | 1.89 | 2.01 | 2.13 | 2.18 |
| Translation | 1.62 | 2.00 | 2.22 | 2.37 | 2.44 |
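To get an overall figure per setting of `k`, the per-use-case numbers above can be averaged. A quick sketch (not part of the model card tooling; values copied verbatim from the table):

```python
# Per-use-case results from the table above, indexed k=1..5.
results = {
    "Coding":             [1.65, 2.06, 2.29, 2.42, 2.52],
    "Math Reasoning":     [1.69, 2.15, 2.43, 2.64, 2.75],
    "QA":                 [1.50, 1.74, 1.88, 1.91, 1.93],
    "Question Answering": [1.58, 1.90, 2.07, 2.19, 2.25],
    "RAG":                [1.53, 1.82, 1.95, 2.04, 2.08],
    "Summarization":      [1.56, 1.89, 2.01, 2.13, 2.18],
    "Translation":        [1.62, 2.00, 2.22, 2.37, 2.44],
}

# Mean across use cases for each k.
for k in range(5):
    mean = sum(v[k] for v in results.values()) / len(results)
    print(f"k={k + 1}: mean = {mean:.2f}")
```

Math Reasoning and Coding benefit most from deeper speculation, while QA flattens out after k=3.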
Command
```shell
GUIDELLM__MAX_CONCURRENCY=128 \
GUIDELLM__PREFERRED_ROUTE="chat_completions" \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/speculator_benchmarks" \
  --data-args '{"data_files": "HumanEval.jsonl"}' \
  --rate-type throughput \
  --max-seconds 300 \
  --output-path "gpt-oss-20b-HumanEval.json"
```
The GuideLLM interface has since changed; for compatibility with the latest version (v0.6.0), use the following command instead:
```shell
GUIDELLM__PREFERRED_ROUTE="chat_completions" \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/speculator_benchmarks" \
  --data-args '{"data_files": "HumanEval.jsonl"}' \
  --profile sweep \
  --max-seconds 1800 \
  --output-path "my_output.json" \
  --backend-args '{"extras": {"body": {"temperature":0.6, "top_p":0.95, "top_k":20}}}'
```