wpferrell committed · Commit b4cd8de · verified · 1 parent: 06f221d

Add model card

Files changed (1): README.md added (+75 lines)

---
license: apache-2.0
tags:
- bigsmall
- compression
- lossless
- qwen2
---

# Qwen 2.5 7B Instruct (BigSmall compressed)

**15.2 GB → 10.1 GB (66.0% of the original size). Under 2 GB peak RAM. Full quality, not quantization.**

This is Qwen2.5-7B-Instruct compressed with [BigSmall](https://github.com/wpferrell/Bigsmall), a lossless neural network weight compressor. Every weight is bit-identical to the original; there is no accuracy loss.

## Install

```bash
pip install bigsmall
```

## Load and run inference (streaming, under 2 GB peak RAM)

```python
from bigsmall import StreamingLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stream the compressed .bs shards; layers are decompressed one at a time,
# so peak host RAM stays under ~2 GB.
loader = StreamingLoader("wpferrell/qwen2.5-7b-instruct-bigsmall")
model = loader.load_model(AutoModelForCausalLM)

# The tokenizer is unchanged, so load it from the original repo.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

messages = [{"role": "user", "content": "Explain lossless compression."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Or use AutoModel with the transparent hook

```python
import bigsmall

# Patch Hugging Face model loading so BigSmall (.bs) repos load transparently.
bigsmall.install_hook()

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("wpferrell/qwen2.5-7b-instruct-bigsmall")
```

## Compression stats

| Metric | Value |
|--------|-------|
| Original size (BF16) | 15.2 GB |
| Compressed size | 10.1 GB |
| Compressed / original | 66.0% |
| Format | BF16 → BigSmall (`.bs` shards) |
| Lossless verification | MD5 checksum on every tensor (see the sketch below) |
| Peak RAM (streaming load) | < 2 GB |
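
If you want to check the bit-identity claim yourself, the sketch below is one way to do it: load both the compressed and the original checkpoint and compare per-tensor MD5 hashes. This is an illustration, not part of the `bigsmall` API; `tensor_md5` is a helper defined here for the example, and the script assumes enough RAM to hold both models at once.

```python
import hashlib

import torch
from transformers import AutoModelForCausalLM
from bigsmall import StreamingLoader

def tensor_md5(t: torch.Tensor) -> str:
    # Hash the raw bytes of a tensor (illustrative helper, not a bigsmall function).
    raw = t.detach().cpu().contiguous().view(torch.uint8)
    return hashlib.md5(raw.numpy().tobytes()).hexdigest()

# Model reconstructed from the BigSmall shards.
compressed = StreamingLoader("wpferrell/qwen2.5-7b-instruct-bigsmall").load_model(AutoModelForCausalLM)

# Original BF16 checkpoint for reference.
reference = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct", torch_dtype=torch.bfloat16)
ref_state = reference.state_dict()

mismatches = [name for name, t in compressed.state_dict().items()
              if tensor_md5(t) != tensor_md5(ref_state[name])]
print("all tensors bit-identical" if not mismatches else f"mismatched tensors: {mismatches[:5]}")
```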

## Comparison

Ratios are compressed size as a percentage of the original (lower is better).

| Tool | BF16 ratio | FP32 ratio | Inference overhead | Hardware |
|------|------------|------------|--------------------|----------|
| [ZipNN](https://arxiv.org/abs/2411.05239) | 67% | 83% | None | CPU |
| [DFloat11](https://arxiv.org/abs/2504.11651) | ~70% | BF16 only | ~2x at batch=1 | CUDA only |
| [ZipServ](https://arxiv.org/abs/2603.17435) | ~70% | BF16 only | 1.22x faster | GDDR GPU |
| **BigSmall** | **65.6%** | **75.5%** | **None** | **CPU + any GPU** |

## About BigSmall

BigSmall compresses at the joint entropy floor for neural network weights: it codes the sign and exponent bits jointly, and the mantissa bits conditioned on the exponent, achieving the information-theoretic minimum for that bit layout. The streaming loader decompresses one transformer layer at a time directly into VRAM. A rough illustration of the entropy decomposition is sketched below.
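
As a back-of-the-envelope illustration of that decomposition (not BigSmall's actual coder), the sketch below estimates the empirical entropy of a BF16 tensor as H(sign, exponent) + H(mantissa | exponent) and compares it with the naive 16 bits per weight. The random Gaussian tensor is a stand-in; real checkpoints have more skewed statistics.

```python
import numpy as np
import torch

def empirical_entropy(symbols: np.ndarray) -> float:
    # Shannon entropy (bits/symbol) of the observed symbol distribution.
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Illustrative stand-in for a real weight tensor.
w = torch.randn(1_000_000, dtype=torch.bfloat16)

# Reinterpret the 16 raw bits of each BF16 value: 1 sign, 8 exponent, 7 mantissa.
bits = w.view(torch.int16).numpy().astype(np.uint16)
sign_exp = bits >> 7      # top 9 bits: sign + exponent, coded jointly
mantissa = bits & 0x7F    # low 7 bits: mantissa

# H(sign, exponent) as a single joint symbol.
h_se = empirical_entropy(sign_exp)

# H(mantissa | exponent): average mantissa entropy within each sign/exponent bucket.
h_m_given_e = 0.0
for e in np.unique(sign_exp):
    mask = sign_exp == e
    h_m_given_e += mask.mean() * empirical_entropy(mantissa[mask])

floor = h_se + h_m_given_e
print(f"H(sign,exp)            = {h_se:.3f} bits")
print(f"H(mantissa | exponent) = {h_m_given_e:.3f} bits")
print(f"entropy floor estimate = {floor:.3f} bits vs 16 bits raw ({floor / 16:.1%} of original)")
```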

- GitHub: [wpferrell/Bigsmall](https://github.com/wpferrell/Bigsmall)
- PyPI: `pip install bigsmall`
- Paper: [BigSmall: Lossless Neural Network Weight Compression at the Joint Entropy Floor](https://github.com/wpferrell/Bigsmall/blob/main/paper.pdf)