Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization
Paper: arXiv:2604.08118
This is Llama 3.2 3B quantized to 2 bits per parameter using AQLM with OA-EM initialization and PV-tuning.
OA-EM (Output-Aware Expectation-Maximisation) is a Hessian-weighted EM algorithm that significantly improves codebook initialization for additive quantization, particularly at extreme compression rates. See the paper Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization for details.
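In spirit, a Hessian-weighted EM initialisation alternates between assigning weight groups to codewords under a Hessian-weighted distance (E-step) and updating codewords as Hessian-weighted means (M-step). The toy sketch below is a simplified single-codebook version with a diagonal Hessian, not the paper's full additive multi-codebook algorithm; all names and simplifications here are illustrative:

```python
import numpy as np

def hessian_weighted_em(W, h, K=256, group=8, rounds=3, seed=42):
    """Toy single-codebook sketch of Hessian-weighted EM codebook init.

    W: (n, d) weight matrix; h: (d,) diagonal Hessian proxy (e.g. input
    activation second moments). Groups of `group` consecutive weights in
    each row share one codeword index.
    """
    rng = np.random.default_rng(seed)
    X = W.reshape(-1, group)                             # one row per weight group
    Hw = np.tile(h.reshape(-1, group), (W.shape[0], 1))  # matching Hessian weights
    C = X[rng.choice(len(X), K, replace=False)]          # init codebook from data
    for _ in range(rounds):
        # E-step: assign each group to the codeword minimising H-weighted error
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2 * Hw[:, None, :]).sum(-1)
        assign = d2.argmin(1)
        # M-step: each codeword becomes the H-weighted mean of its groups
        for k in range(K):
            m = assign == k
            if m.any():
                C[k] = (X[m] * Hw[m]).sum(0) / Hw[m].sum(0).clip(1e-12)
    return C, assign
```

Weighting the squared error by the Hessian diagonal makes the codebook spend its capacity on the weights whose perturbation most affects the layer output, which is the "output-aware" idea; the real OA-EM additionally handles multiple additive codebooks and Adam-based refinement.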
| Parameter | Value |
|---|---|
| Method | AQLM + OA-EM init + PV-tuning |
| Bitrate | 2 bits per parameter (2 codebooks × 8 bits, group size 8) |
| Codebook config | num_codebooks=2, nbits_per_codebook=8, in_group_size=8 |
| Beam size | 4 |
| OA-EM config | 3 rounds, 100 Adam steps, lr=1e-4 |
| PV-tuning | 5 epochs, LAMB optimizer, lr=3e-4, 10K C4 samples |
| Seed | 42 |
| Context length | 4096 tokens |
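The 2-bit figure follows directly from the codebook configuration: each group of 8 weights is encoded by one 8-bit index per codebook. A quick check (ignoring the amortised cost of storing the codebooks themselves):

```python
# Effective index bitrate for the 2x8 AQLM config above
num_codebooks = 2
nbits_per_codebook = 8
in_group_size = 8

bits_per_group = num_codebooks * nbits_per_codebook  # 16 bits of indices per group
bits_per_param = bits_per_group / in_group_size      # spread over 8 weights
print(bits_per_param)  # 2.0
```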
| Model | PPL (pre-PV) | PPL (post-PV) |
|---|---|---|
| FP16 (unquantized) | 7.28 | — |
| AQLM Greedy (b=4) | 352.39 | 12.66 |
| AQLM OA-EM (b=4) | 16.82 | 11.53 |
| Task | Metric | AQLM Greedy | AQLM OA-EM |
|---|---|---|---|
| ARC-Challenge | acc_norm ↑ | .350 | .359 |
| ARC-Easy | acc_norm ↑ | .560 | .614 |
| HellaSwag | acc_norm ↑ | .620 | .625 |
| LAMBADA | acc ↑ | .577 | .584 |
| LAMBADA | ppl ↓ | 7.65 | 7.13 |
| PIQA | acc_norm ↑ | .734 | .734 |
| WinoGrande | acc ↑ | .594 | .619 |
| Average | — | .573 | .589 |
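Scores of this kind are typically produced with EleutherAI's lm-evaluation-harness; a plausible invocation for reproducing them is sketched below (task names follow harness conventions; exact flags and task identifiers may differ by version, so treat this as a starting point rather than the authors' exact command):

```shell
pip install lm-eval "aqlm[gpu]"

lm_eval --model hf \
  --model_args pretrained=kennedyian94/Llama-3.2-3B-AQLM-OA-EM-2Bit-2x8,trust_remote_code=True \
  --tasks arc_challenge,arc_easy,hellaswag,lambada_openai,piqa,winogrande \
  --batch_size 8
```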
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "kennedyian94/Llama-3.2-3B-AQLM-OA-EM-2Bit-2x8",
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("kennedyian94/Llama-3.2-3B-AQLM-OA-EM-2Bit-2x8")

inputs = tokenizer("The meaning of life is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Note: inference requires the `aqlm` library:

```shell
pip install aqlm[gpu]
```
Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization
Ian W. Kennedy and Nafise Sadat Moosavi, University of Sheffield
```bibtex
@article{kennedy2026oaem,
  title={Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization},
  author={Kennedy, Ian W. and Moosavi, Nafise Sadat},
  journal={arXiv preprint arXiv:2604.08118},
  year={2026}
}
```