gpt-oss-20b-speculator.eagle3

Model Overview

  • Verifier: openai/gpt-oss-20b
  • Speculative Decoding Algorithm: EAGLE-3
  • Model Architecture: Eagle3Speculator
  • Release Date: 04/02/2026
  • Version: 3.0
  • Model Developers: Red Hat

This is a speculator model designed for use with openai/gpt-oss-20b, based on the EAGLE-3 speculative decoding algorithm. It was trained using the speculators library on hidden states distilled from openai/gpt-oss-120b, using the inference-optimization/gpt-oss-120b dataset (a combination of Magpie-Align/Magpie-Llama-3.1-Pro-300K-Filtered and the train_sft split of HuggingFaceH4/ultrachat_200k). This model should be used with the openai/gpt-oss-20b chat template, specifically through the /chat/completions endpoint.

Use with vLLM

vllm serve openai/gpt-oss-20b \
  -tp 1 \
  --speculative-config '{
    "model": "RedHatAI/gpt-oss-20b-speculator.eagle3",
    "num_speculative_tokens": 3,
    "method": "eagle3"
  }'
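With the server running, requests go through the OpenAI-compatible /chat/completions route, which applies the gpt-oss-20b chat template server-side. A minimal sketch of the request body follows; the prompt text and max_tokens value are illustrative choices, not values from this card:

```python
import json

# Build a /chat/completions request payload for the vLLM server above.
# The "model" field names the verifier; vLLM routes drafting to the
# speculator configured via --speculative-config.
payload = {
    "model": "openai/gpt-oss-20b",
    "messages": [
        {"role": "user", "content": "Write a function that reverses a string."},
    ],
    "max_tokens": 256,  # illustrative value
}

body = json.dumps(payload)
# POST this body to http://localhost:8000/v1/chat/completions with
# Content-Type: application/json (e.g. via curl or requests.post).
print(body)
```

Using the plain /completions route would bypass the chat template the speculator was trained against, so acceptance rates would likely degrade.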

Evaluations

Use cases

Use Case            Dataset         Number of Samples
Coding              HumanEval       168
Math Reasoning      math_reasoning  80
QA                  qa              80
Question Answering  question        80
RAG                 rag             80
Summarization       summarization   80
Translation         translation     80

Acceptance lengths

Use Case            k=1   k=2   k=3   k=4   k=5
Coding              1.65  2.06  2.29  2.42  2.52
Math Reasoning      1.69  2.15  2.43  2.64  2.75
QA                  1.50  1.74  1.88  1.91  1.93
Question Answering  1.58  1.90  2.07  2.19  2.25
RAG                 1.53  1.82  1.95  2.04  2.08
Summarization       1.56  1.89  2.01  2.13  2.18
Translation         1.62  2.00  2.22  2.37  2.44
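Acceptance length is the mean number of tokens committed per verifier forward pass, so higher values mean fewer verifier passes per generated token. As a rough back-of-the-envelope sketch (not part of this card's methodology), the wall-clock speedup can be estimated under the simplifying assumption that each drafter step costs a fixed fraction of a verifier pass; the draft_cost_ratio below is a hypothetical placeholder, not a measured number:

```python
def estimated_speedup(acceptance_length: float, k: int,
                      draft_cost_ratio: float = 0.05) -> float:
    """Rough speedup estimate for speculative decoding.

    acceptance_length: mean tokens committed per verifier pass (table above).
    k: number of speculative tokens drafted per step.
    draft_cost_ratio: assumed cost of one drafter step relative to one
        verifier forward pass (hypothetical value, not measured here).
    """
    # One verifier pass plus k drafter steps yields `acceptance_length`
    # tokens, versus one token per verifier pass without speculation.
    return acceptance_length / (1.0 + k * draft_cost_ratio)

# e.g. Coding at k=3 (acceptance length 2.29):
print(round(estimated_speedup(2.29, 3), 2))  # → 1.99
```

This ignores scheduling overhead and batching effects, so treat it only as a way to compare the k settings in the table, not as a throughput guarantee.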
Configuration
  • temperature: default (greedy)
  • repetitions: 1
  • time per experiment: 5min
  • hardware: 1xA100
  • vLLM version: 0.17.1
  • GuideLLM version: 0.5.5

Command

GUIDELLM__MAX_CONCURRENCY=128 \
GUIDELLM__PREFERRED_ROUTE="chat_completions" \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/speculator_benchmarks" \
  --data-args '{"data_files": "HumanEval.jsonl"}' \
  --rate-type throughput \
  --max-seconds 300 \
  --output-path "gpt-oss-20b-HumanEval.json"

The GuideLLM command-line interface changed in v0.6.0; for compatibility with the latest version, use the following command instead:

GUIDELLM__PREFERRED_ROUTE="chat_completions" \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/speculator_benchmarks" \
  --data-args '{"data_files": "HumanEval.jsonl"}' \
  --profile sweep \
  --max-seconds 1800 \
  --output-path "my_output.json" \
  --backend-args '{"extras": {"body": {"temperature":0.6, "top_p":0.95, "top_k":20}}}'
Downloads last month: 19,019
Model size: 0.9B params (safetensors)
Tensor types: I64, BF16, BOOL