# Llama-3.1-8B-AQLM-OA-EM-2Bit-2x8

This is Llama 3.1 8B quantized to 2 bits per parameter using AQLM with OA-EM initialization and PV-tuning.

OA-EM (Output-Aware Expectation-Maximisation) is a Hessian-weighted EM algorithm that significantly improves codebook initialization for additive quantization, particularly at extreme compression rates. See our paper for details.
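The full OA-EM procedure is defined in the paper; as a loose illustration only (the function name and shapes here are hypothetical, not the released code), a diagonal-Hessian-weighted k-means captures the core idea: codeword assignments minimize the Hessian-weighted reconstruction error, while with a diagonal Hessian shared across weight groups the codeword update reduces to a plain mean.

```python
import numpy as np

def oa_em_init(W, h, k=256, iters=10, seed=0):
    """Sketch of Hessian-weighted EM (weighted k-means) codebook init.

    W : (n, d) weight groups to quantize
    h : (d,)   diagonal input-Hessian weights (per-coordinate importance)
    Returns (codebook, assignments).
    """
    rng = np.random.default_rng(seed)
    C = W[rng.choice(len(W), size=k, replace=False)].copy()  # init from data
    for _ in range(iters):
        # E-step: assign each group to the codeword with the smallest
        # Hessian-weighted squared error, sum_j h_j * (w_j - c_j)^2
        d2 = ((W[:, None, :] - C[None, :, :]) ** 2 * h).sum(-1)
        a = d2.argmin(1)
        # M-step: with a shared diagonal Hessian, h_j factors out of the
        # per-coordinate objective, so the update is the plain mean
        for j in range(k):
            m = a == j
            if m.any():
                C[j] = W[m].mean(0)
    return C, a
```

The Hessian only changes which codeword each group is assigned to, not the update itself; a non-diagonal (output-aware) Hessian would also change the M-step.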

## Quantization Details

| Parameter | Value |
|---|---|
| Method | AQLM + OA-EM init + PV-tuning |
| Bitrate | 2 bpp (2 codebooks × 8-bit, group size 8) |
| Codebook config | `num_codebooks=2, nbits_per_codebook=8, in_group_size=8` |
| Beam size | 8 |
| OA-EM config | 3 rounds, 100 Adam steps, lr=1e-4 |
| PV-tuning | 5 epochs, LAMB optimizer, lr=3e-4, 10K C4 samples |
| Seed | 42 |
| Context length | 4096 tokens |
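The codebook settings above imply the stated 2 bpp. A hypothetical sketch of the corresponding `quantization_config` block (field names follow the usual AQLM convention, but verify against the `config.json` actually shipped with this checkpoint):

```python
# Hypothetical quantization_config sketch -- field names follow the AQLM
# convention; check the checkpoint's own config.json for the real values.
quantization_config = {
    "quant_method": "aqlm",
    "num_codebooks": 2,       # two additive codebooks per weight group
    "nbits_per_codebook": 8,  # 2**8 = 256 codewords per codebook
    "in_group_size": 8,       # 8 input weights share one code
    "out_group_size": 1,      # assumed default
}

# 2 codebooks * 8 bits / 8 weights = 2 bits per parameter
bits_per_param = (quantization_config["num_codebooks"]
                  * quantization_config["nbits_per_codebook"]
                  / quantization_config["in_group_size"])
```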

## Results

### Perplexity

| Model | Wiki-2 (pre-PV) | C4 (pre-PV) | Wiki-2 (post-PV) | C4 (post-PV) |
|---|---|---|---|---|
| AQLM Greedy | 18.86 | 15.01 | 9.39 | 12.02 |
| AQLM OA-EM | 16.38 | 14.50 | 9.25 | 11.89 |
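Perplexity figures like these are the exponential of the mean per-token negative log-likelihood over the evaluation set (lower is better); a minimal helper:

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood, in nats)."""
    return math.exp(sum(token_nlls) / len(token_nlls))
```

For intuition: a model that assigns every token probability 1/e (an NLL of 1 nat) scores a perplexity of e ≈ 2.718.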

### Downstream Tasks (post-PV-tuning, beam size 8)

| Task | Metric | AQLM Greedy | AQLM OA-EM |
|---|---|---|---|
| ARC-Challenge | acc_norm ↑ | .432 | .424 |
| ARC-Easy | acc_norm ↑ | .656 | .677 |
| HellaSwag | acc_norm ↑ | .707 | .714 |
| LAMBADA | acc ↑ | .679 | .675 |
| LAMBADA | ppl ↓ | 4.60 | 4.59 |
| PIQA | acc_norm ↑ | .759 | .769 |
| WinoGrande | acc ↑ | .661 | .679 |
| Average | | .649 | .656 |
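The Average row is the mean of the six accuracy-style metrics; the LAMBADA perplexity row is excluded, since lower is better there. A quick check:

```python
# Accuracy-style metrics only (LAMBADA ppl excluded)
greedy = [0.432, 0.656, 0.707, 0.679, 0.759, 0.661]
oa_em  = [0.424, 0.677, 0.714, 0.675, 0.769, 0.679]

avg = lambda xs: round(sum(xs) / len(xs), 3)
print(avg(greedy), avg(oa_em))  # 0.649 0.656
```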

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "kennedyian94/Llama-3.1-8B-AQLM-OA-EM-2Bit-2x8",
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("kennedyian94/Llama-3.1-8B-AQLM-OA-EM-2Bit-2x8")

inputs = tokenizer("The meaning of life is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note: requires the `aqlm` inference library:

```shell
pip install aqlm[gpu]
```

## Paper

Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization

Ian W. Kennedy and Nafise Sadat Moosavi, University of Sheffield

arXiv:2604.08118 | Code

## Citation

```bibtex
@article{kennedy2026oaem,
  title={Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization},
  author={Kennedy, Ian W. and Moosavi, Nafise Sadat},
  journal={arXiv preprint arXiv:2604.08118},
  year={2026}
}
```