Qwen 2.5 14B Instruct (BigSmall compressed)

29.5 GB → 19.5 GB (66.1% of the original size). Under 2 GB peak RAM. Full quality, not quantization.

This is Qwen2.5-14B-Instruct compressed with BigSmall, a lossless neural-network weight compression format. Every weight is bit-identical to the original; there is no accuracy loss whatsoever.

Install

pip install bigsmall

Load and run inference (streaming, under 2 GB peak RAM)

from bigsmall import StreamingLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

# Decompresses one shard at a time, so peak host RAM stays under 2 GB.
loader = StreamingLoader("wpferrell/qwen2.5-14b-instruct-bigsmall")
model = loader.load_model(AutoModelForCausalLM)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct")

messages = [{"role": "user", "content": "Explain lossless compression."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
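The streaming loader keeps peak host RAM low by holding only one decompressed shard in memory at a time before moving it to the target device. BigSmall's internals are not published on this card, so the following is a generic sketch of that pattern with hypothetical names (`stream_load`, `decompress`), not the library's actual API:

```python
import torch

def stream_load(shards, decompress, device="cpu"):
    """Assemble a state dict shard by shard so that only one
    decompressed shard is resident in host RAM at any time."""
    state = {}
    for shard in shards:                # e.g. one transformer layer per shard
        tensors = decompress(shard)     # decompress just this shard
        for name, t in tensors.items():
            state[name] = t.to(device)  # move off the host before continuing
        del tensors                     # release the shard's buffers
    return state
```

Peak RAM is then bounded by the largest single shard plus bookkeeping, rather than by the full 29.5 GB checkpoint.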

Or use AutoModel with the transparent hook

import bigsmall

# Patch transformers so from_pretrained can read BigSmall (.bs) shards.
bigsmall.install_hook()

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("wpferrell/qwen2.5-14b-instruct-bigsmall")

Compression stats

Metric                  Value
Original size           29.5 GB
Compressed size         19.5 GB
Compression ratio       66.1% of original (BF16)
Format                  BF16 → BigSmall (.bs shards)
Lossless verification   MD5 checksum on every tensor
Peak RAM (streaming)    < 2 GB
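The MD5 verification above can be reproduced independently: hash each tensor's raw bytes in both the original and the decompressed checkpoint and compare. A minimal sketch, with hypothetical helper names (BigSmall's own verifier may differ):

```python
import hashlib
import torch

def tensor_md5(t: torch.Tensor) -> str:
    # Reinterpret raw bytes so BF16 tensors hash without dtype conversion.
    data = t.detach().cpu().contiguous().view(torch.uint8)
    return hashlib.md5(data.numpy().tobytes()).hexdigest()

def verify_lossless(original_sd: dict, decompressed_sd: dict) -> bool:
    # Every tensor must be bit-identical, not merely numerically close.
    return all(
        tensor_md5(original_sd[name]) == tensor_md5(decompressed_sd[name])
        for name in original_sd
    )
```

Byte-level hashing matters here: a `torch.allclose` check would also pass for lossy schemes, whereas an MD5 match only holds for bit-identical weights.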

Comparison

Tool       BF16 Ratio   FP32 Ratio   Inference Overhead   Hardware
ZipNN      67%          83%          None                 CPU
DFloat11   ~70%         BF16 only    ~2x at batch=1       CUDA only
ZipServ    ~70%         BF16 only    1.22x faster         GDDR GPU
BigSmall   65.6%        75.5%        None                 CPU + any GPU

About BigSmall

BigSmall compresses at the joint entropy floor for neural network weights. It codes sign+exponent jointly and mantissa conditioned on exponent, achieving the information-theoretic minimum. The streaming loader decompresses one transformer layer at a time directly into VRAM.
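The coder itself is not published on this card, but the entropy floor it targets can be estimated empirically from a tensor's bit patterns. A sketch, assuming the split described above (sign and exponent coded jointly, mantissa conditioned on exponent); `bf16_entropy_floor` is a hypothetical helper, not BigSmall's API:

```python
import collections
import math
import torch

def bf16_entropy_floor(t: torch.Tensor) -> float:
    """Estimate bits per weight as H(sign, exponent) + H(mantissa | exponent),
    using empirical symbol frequencies from one BF16 tensor."""
    # View the 16-bit patterns as ints; mask to get unsigned values.
    bits = [b & 0xFFFF for b in t.detach().cpu().contiguous().view(torch.int16).tolist()]
    # BF16 layout: 1 sign bit, 8 exponent bits, 7 mantissa bits.
    se = [b >> 7 for b in bits]            # sign+exponent, 9 bits jointly
    exp = [(b >> 7) & 0xFF for b in bits]  # exponent alone, for conditioning
    man = [b & 0x7F for b in bits]         # mantissa, 7 bits

    def entropy(counts, total):
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    n = len(bits)
    h_se = entropy(collections.Counter(se), n)
    # Chain rule: H(mantissa | exponent) = H(exponent, mantissa) - H(exponent)
    h_cond = entropy(collections.Counter(zip(exp, man)), n) - \
             entropy(collections.Counter(exp), n)
    return h_se + h_cond
```

Dividing the result by 16 gives the fraction of the BF16 width a coder at this floor would need, i.e. a lower bound on the achievable compressed-size ratio under this model of the weights.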
