548 MB → 414 MB (75.5% of original size). Bit-identical. Under 500 MB peak RAM with streaming.
This is GPT-2 117M compressed with BigSmall — lossless neural network weight compression. Not quantization. Every weight is bit-identical to the original.
```bash
pip install bigsmall
```
```python
from bigsmall import StreamingLoader
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Streams one layer at a time — under 500 MB peak RAM
loader = StreamingLoader("wpferrell/gpt2-bigsmall")
model = loader.load_model(GPT2LMHeadModel)

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
inputs = tokenizer("Hello, I'm a language model", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
```
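To spot-check the peak-RAM claim on Linux, the standard-library `resource` module is one option. This is a quick sketch, not how the number in this card was measured:

```python
import resource

from bigsmall import StreamingLoader
from transformers import GPT2LMHeadModel

loader = StreamingLoader("wpferrell/gpt2-bigsmall")
model = loader.load_model(GPT2LMHeadModel)

# ru_maxrss is in kilobytes on Linux (bytes on macOS).
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS: {peak / 1024:.0f} MB")
```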
To load without streaming:

```python
from bigsmall import from_pretrained
from transformers import GPT2LMHeadModel

model = from_pretrained("wpferrell/gpt2-bigsmall", model_class=GPT2LMHeadModel)
```
| File | Original | Compressed | Ratio (compressed / original) |
|---|---|---|---|
| model.safetensors (FP32) | 548 MB | 414 MB | 75.5% |
Verified lossless: the MD5 of every weight tensor matches the original after decompression.
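A sketch of the same check, assuming the decompressed model exposes a standard `state_dict` (illustrative only, not the card's verification script; `torch.equal` tests exact equality, which is equivalent to matching hashes for bit-identity):

```python
import torch
from bigsmall import from_pretrained
from transformers import GPT2LMHeadModel

# Reference FP32 weights from the original checkpoint.
reference = GPT2LMHeadModel.from_pretrained("openai-community/gpt2").state_dict()

# Weights decompressed from the BigSmall artifact.
model = from_pretrained("wpferrell/gpt2-bigsmall", model_class=GPT2LMHeadModel)

for name, tensor in model.state_dict().items():
    # Bit-identical means exact equality, so torch.equal, not torch.allclose.
    assert torch.equal(tensor, reference[name]), f"mismatch: {name}"
print("all tensors bit-identical")
```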
| Tool | BF16 Ratio | FP32 Ratio | Inference Overhead | Hardware |
|---|---|---|---|---|
| ZipNN | 67% | 83% | None | CPU |
| DFloat11 | ~70% | n/a (BF16 only) | ~2x at batch=1 | CUDA only |
| ZipServ | ~70% | n/a (BF16 only) | 1.22x faster | GDDR GPU |
| BigSmall | 65.6% | 75.5% | None | CPU + any GPU |
Lower ratio = better compression. BigSmall BF16 ratio measured on Mistral 7B. ZipServ: paper 2603.17435.
BigSmall compresses at the joint entropy floor for neural network weights: it codes the sign and exponent jointly, and the mantissa conditioned on the exponent, which achieves the information-theoretic minimum under that factorization. The streaming loader decompresses one transformer layer at a time directly into VRAM.
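As a rough illustration of where that floor comes from (this sketch is not the bigsmall codec; the tensor choice and the per-byte mantissa model are simplifications), one can estimate H(sign, exponent) + H(mantissa | exponent) directly from the FP32 bit fields:

```python
import numpy as np
from transformers import GPT2LMHeadModel

def entropy(counts):
    """Shannon entropy in bits of an empirical distribution."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Any FP32 weight tensor will do; GPT-2's token embedding is used here.
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")
bits = model.transformer.wte.weight.detach().numpy().view(np.uint32).ravel()

sign_exp = bits >> 23        # top 9 bits: sign + 8-bit exponent
mantissa = bits & 0x7FFFFF   # low 23 bits

# H(sign, exponent), coded jointly as one 9-bit symbol.
h_se = entropy(np.bincount(sign_exp, minlength=512))

# H(mantissa | exponent), estimated byte-by-byte to keep alphabets small.
# (Simplification: summing per-byte entropies upper-bounds the true
# 23-bit conditional entropy.)
h_m, n = 0.0, len(bits)
for e in np.unique(sign_exp):
    sel = mantissa[sign_exp == e]
    w = len(sel) / n
    for shift in (16, 8, 0):  # the three bytes of the 23-bit mantissa
        h_m += w * entropy(np.bincount((sel >> shift) & 0xFF, minlength=256))

print(f"estimated floor ≈ {h_se + h_m:.2f} bits/weight (vs 32 bits stored)")
```

The gap between the estimated floor and 32 bits/weight is what a lossless codec can reclaim, which is why the FP32 ratio lands well below 100% with no change to the weights.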