ZipServ: Fast and Memory-Efficient LLM Inference with Hardware-Aware Lossless Compression
Paper • 2603.17435 • Published
29.5 GB → 19.5 GB (66.1% of original). Under 2 GB peak RAM. Full quality, not quantization.
This is Qwen2.5-14B-Instruct compressed with BigSmall, a lossless neural-network weight compressor. Every weight is bit-identical to the original. No accuracy loss whatsoever.
```bash
pip install bigsmall
```
```python
from bigsmall import StreamingLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stream-decompress the .bs shards one layer at a time (peak RAM stays under 2 GB).
loader = StreamingLoader("wpferrell/qwen2.5-14b-instruct-bigsmall")
model = loader.load_model(AutoModelForCausalLM)

# The tokenizer is unchanged; load it from the original repository.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct")

messages = [{"role": "user", "content": "Explain lossless compression."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
```python
import bigsmall

# Alternatively, install the import hook once; after that, the standard
# transformers loading path handles BigSmall (.bs) repositories transparently.
bigsmall.install_hook()

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("wpferrell/qwen2.5-14b-instruct-bigsmall")
```
| Metric | Value |
|---|---|
| Original size | 29.5 GB |
| Compressed size | 19.5 GB |
| Compression ratio | 66.1% of original (BF16) |
| Format | BF16 → BigSmall (.bs shards) |
| Lossless verification | MD5 checksum on every tensor (see sketch below) |
| Peak RAM (streaming) | < 2 GB |
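The MD5 check in the table is easy to reproduce. Here is a minimal sketch with my own helper functions (these are not part of the bigsmall API): hash every tensor of the decompressed model, do the same for the original checkpoint, and compare name by name.

```python
# Minimal verification sketch; `tensor_md5` and `state_dict_hashes` are
# hypothetical helpers, not part of bigsmall.
import hashlib
import torch

def tensor_md5(t: torch.Tensor) -> str:
    # Hash the raw bytes: bit-identical tensors produce identical digests.
    data = t.detach().cpu().contiguous().flatten().view(torch.uint8).numpy().tobytes()
    return hashlib.md5(data).hexdigest()

def state_dict_hashes(model) -> dict:
    return {name: tensor_md5(t) for name, t in model.state_dict().items()}

# After loading both models (bigsmall-decompressed and the original),
# every entry of the two hash dicts should match:
#   assert state_dict_hashes(decompressed) == state_dict_hashes(original)
```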
| Tool | BF16 Ratio | FP32 Ratio | Inference Speed Impact | Hardware |
|---|---|---|---|---|
| ZipNN | 67% | 83% | None | CPU |
| DFloat11 | ~70% | BF16 only | ~2× slower at batch=1 | CUDA only |
| ZipServ | ~70% | BF16 only | 1.22× faster | GDDR GPU |
| BigSmall | 65.6% | 75.5% | None | CPU + any GPU |

(Ratios are compressed size as a fraction of the original; lower is better.)
BigSmall compresses at the joint entropy floor for neural-network weights: it codes the sign and exponent bits jointly, and the mantissa conditioned on the exponent, which reaches the information-theoretic minimum for that decomposition (sketched below). The streaming loader decompresses one transformer layer at a time directly into VRAM, which is why peak host RAM stays under 2 GB.
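For intuition on that entropy floor, here is a rough estimator of my own (not the BigSmall coder itself) that measures H(sign, exponent) + H(mantissa | exponent) for a BF16 tensor and compares it to the raw 16 bits per weight:

```python
# Rough per-weight entropy-floor estimate for a BF16 tensor:
# H(sign, exponent) + H(mantissa | exponent). Illustrative only.
import numpy as np
import torch

def entropy_bits(counts: np.ndarray) -> float:
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def bf16_entropy_floor(w: torch.Tensor) -> float:
    # BF16 layout: bit 15 = sign, bits 14-7 = exponent, bits 6-0 = mantissa.
    bits = w.contiguous().view(torch.int16).numpy().view(np.uint16).astype(np.int64)
    sign_exp = bits >> 7                    # joint 9-bit (sign, exponent) symbol
    exponent = (bits >> 7) & 0xFF
    mantissa = bits & 0x7F
    h_se = entropy_bits(np.bincount(sign_exp, minlength=1 << 9))
    # H(mantissa | exponent) = H(exponent, mantissa) - H(exponent)
    h_joint = entropy_bits(np.bincount((exponent << 7) | mantissa, minlength=1 << 15))
    h_e = entropy_bits(np.bincount(exponent, minlength=1 << 8))
    return h_se + (h_joint - h_e)

w = torch.randn(1 << 20).to(torch.bfloat16)  # stand-in for a real weight tensor
print(f"~{bf16_entropy_floor(w):.2f} bits/weight vs 16 raw")
```

On trained checkpoints the exponent distribution is narrow while the mantissa is close to uniform, so nearly all of the savings comes from the sign/exponent side.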
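And for intuition on the streaming loader's memory profile, a schematic sketch; `shards` and `decompress` are hypothetical stand-ins, since this card does not document bigsmall's internals:

```python
# Schematic layer-at-a-time streaming: decompress one shard into host memory,
# copy it to VRAM, and free the host buffer before the next shard, so peak
# host RAM is roughly one transformer layer rather than the whole model.
import torch

def stream_into_vram(shards, decompress, device="cuda"):
    weights = {}
    for name, shard in shards:           # e.g. one transformer block per .bs shard
        host = decompress(shard)         # CPU tensor for this layer only
        weights[name] = host.to(device)  # upload into VRAM
        del host                         # host buffer released
    return weights
```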