Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization
Paper: arXiv:2604.08118
This is Llama 3.1 8B quantized to 2 bits per parameter using AQLM with OA-EM initialisation and PV-tuning.
OA-EM (Output-Aware Expectation-Maximisation) is a Hessian-weighted EM algorithm that significantly improves codebook initialisation for additive quantization, particularly at extreme compression rates. See our paper for details.
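The core idea of a Hessian-weighted EM initialisation can be sketched as a weighted k-means over weight groups, where each group's contribution is scaled by its diagonal-Hessian importance. This is an illustrative single-codebook toy version, not the paper's full OA-EM algorithm (which alternates over multiple additive codebooks with Adam refinement); all variable names and sizes here are ours.

```python
# Toy Hessian-weighted EM for codebook initialisation (illustrative sketch).
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(1024, 8))        # 1024 weight groups, group size 8
h = rng.uniform(0.1, 1.0, size=1024)  # diagonal-Hessian importance per group
# Initialise 256 codewords (one 8-bit codebook) from random weight groups.
C = W[rng.choice(1024, size=256, replace=False)].copy()

for _ in range(10):
    # E-step: assign each group to its nearest codeword (plain Euclidean here;
    # OA-EM uses an output-aware, Hessian-weighted objective).
    d = ((W[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    a = d.argmin(1)
    # M-step: each codeword becomes the Hessian-weighted mean of its members,
    # so groups with larger curvature pull the codeword harder.
    for k in range(256):
        m = a == k
        if m.any():
            C[k] = (h[m, None] * W[m]).sum(0) / h[m].sum()

# Hessian-weighted reconstruction error after EM.
err = (h * ((W - C[a]) ** 2).sum(-1)).sum()
```

Because the per-group weight h scales but never changes each group's nearest codeword, the E-step and the weighted M-step both decrease the same weighted objective, so the loop converges monotonically.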
| Parameter | Value |
|---|---|
| Method | AQLM + OA-EM init + PV-tuning |
| Bitrate | 2 bpp (2 codebooks × 8 bits, group size 8) |
| Codebook config | num_codebooks=2, nbits_per_codebook=8, in_group_size=8 |
| Beam size | 8 |
| OA-EM config | 3 rounds, 100 Adam steps, lr=1e-4 |
| PV-tuning | 5 epochs, LAMB optimizer, lr=3e-4, 10K C4 samples |
| Seed | 42 |
| Context length | 4096 tokens |
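The 2 bpp figure follows directly from the codebook configuration above: each group of 8 weights is encoded by 2 codes of 8 bits each, i.e. 16 bits per 8 weights (codebook storage itself adds a small overhead not counted here).

```python
# Bits per parameter implied by the codebook configuration in the table above.
num_codebooks = 2
nbits_per_codebook = 8
in_group_size = 8

bpp = num_codebooks * nbits_per_codebook / in_group_size
print(bpp)  # 2.0
```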
| Model | Wiki-2 PPL (pre-PV) | C4 PPL (pre-PV) | Wiki-2 PPL (post-PV) | C4 PPL (post-PV) |
|---|---|---|---|---|
| AQLM Greedy | 18.86 | 15.01 | 9.39 | 12.02 |
| AQLM OA-EM | 16.38 | 14.50 | 9.25 | 11.89 |
| Task | Metric | AQLM Greedy | AQLM OA-EM |
|---|---|---|---|
| ARC-Challenge | acc_norm ↑ | .432 | .424 |
| ARC-Easy | acc_norm ↑ | .656 | .677 |
| HellaSwag | acc_norm ↑ | .707 | .714 |
| LAMBADA | acc ↑ | .679 | .675 |
| LAMBADA | ppl ↓ | 4.60 | 4.59 |
| PIQA | acc_norm ↑ | .759 | .769 |
| WinoGrande | acc ↑ | .661 | .679 |
| Average | | .649 | .656 |
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "kennedyian94/Llama-3.1-8B-AQLM-OA-EM-2Bit-2x8",
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("kennedyian94/Llama-3.1-8B-AQLM-OA-EM-2Bit-2x8")

inputs = tokenizer("The meaning of life is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Note: Requires the `aqlm` inference library:

```shell
pip install aqlm[gpu]
```
Ian W. Kennedy and Nafise Sadat Moosavi, University of Sheffield
```bibtex
@article{kennedy2026oaem,
  title={Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization},
  author={Kennedy, Ian W. and Moosavi, Nafise Sadat},
  journal={arXiv preprint arXiv:2604.08118},
  year={2026}
}
```
Base model: meta-llama/Llama-3.1-8B