arxiv:2604.04987

Cactus: Accelerating Auto-Regressive Decoding with Constrained Acceptance Speculative Sampling

Published on Apr 5 · Submitted by Yongchang Hao on Apr 13
Abstract

Speculative sampling methods are enhanced by formulating them as constrained optimization problems, enabling controlled distribution divergence while maintaining high acceptance rates and output quality.

AI-generated summary

Speculative sampling (SpS) has been successful in accelerating the decoding throughput of auto-regressive large language models by leveraging smaller draft models. SpS strictly enforces that the generated distribution matches that of the verifier LLM. This is unnecessarily restrictive, since slight variations of the verifier's distribution, such as sampling with top-k or temperature, would also be acceptable. Typical acceptance sampling (TAS) alleviates this issue by accepting more tokens using entropy-based heuristics. However, this approach distorts the verifier distribution, potentially degrading output quality when the verifier encodes critical information. In this work, we formalize the speculative sampling algorithm through the lens of constrained optimization. Based on this formulation, we propose Cactus (constrained acceptance speculative sampling), a method that guarantees controlled divergence from the verifier distribution while increasing acceptance rates. Empirical results across a wide range of benchmarks confirm the effectiveness of our approach.
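For context, the baseline verification step that the abstract calls "strict" SpS can be sketched as follows. This is a minimal illustration of the standard rejection-sampling rule (accept a drafted token with probability min(1, p/q), else resample from the residual), not the paper's Cactus acceptance rule; the function name and interface are hypothetical.

```python
import random

def sps_verify(p, q, token, rng=random.random):
    """Standard speculative sampling verification step.

    p: verifier (target) distribution over the vocabulary, as a list of floats
    q: draft distribution over the vocabulary
    token: vocabulary index proposed by the draft model
    Returns (accepted, final_token).
    """
    # Accept the drafted token with probability min(1, p[token] / q[token]).
    if rng() < min(1.0, p[token] / q[token]):
        return True, token

    # On rejection, resample from the residual distribution proportional to
    # max(p - q, 0). This correction keeps the overall output distributed
    # exactly according to p, which is the strictness Cactus relaxes.
    residual = [max(pi - qi, 0.0) for pi, qi in zip(p, q)]
    total = sum(residual)
    r = rng() * total
    acc = 0.0
    for i, w in enumerate(residual):
        acc += w
        if r <= acc:
            return False, i
    return False, len(p) - 1  # numerical-safety fallback
```

When p == q the drafted token is always accepted; the more the draft diverges from the verifier, the more often tokens are rejected, which is what limits SpS speedups and motivates relaxed acceptance schemes like TAS and Cactus.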

Community

Cactus is a speculative decoding method that provably increases token acceptance rates while bounding the divergence from the target distribution. It works as a drop-in replacement for the rejection sampler in vLLM.



Get this paper in your agent:

hf papers read 2604.04987
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
