# Llama-3.2-3B-AQLM-OA-EM-2Bit-2x8

This is Llama 3.2 3B quantized to 2 bits per parameter using AQLM, with OA-EM codebook initialization followed by PV-tuning.

OA-EM (Output-Aware Expectation-Maximisation) is a Hessian-weighted EM algorithm that significantly improves codebook initialization for additive quantization, particularly at extreme compression rates. See the paper *Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization* for details.
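To make the idea concrete, here is a minimal single-codebook sketch of Hessian-weighted EM: essentially weighted k-means, where each input dimension's reconstruction error is weighted by the corresponding diagonal entry of the layer Hessian `H = X X^T` estimated from calibration activations. All names here are illustrative, and the full method additionally uses multiple codebooks, beam search over code assignments, and Adam refinement of the codebooks; see the paper for the exact procedure.

```python
import torch

def oa_em_init(W, h_diag, codebook_size=256, group_size=8, rounds=3):
    """Hessian-weighted EM for a single codebook (simplified sketch).

    W:      (out_features, in_features) layer weight matrix
    h_diag: (in_features,) diagonal of H = X X^T from calibration
            activations; weights the reconstruction error per dimension
    """
    out_f, in_f = W.shape
    n_groups = in_f // group_size
    # Each group of `group_size` consecutive input dims is quantized jointly.
    x = W.reshape(out_f, n_groups, group_size).reshape(-1, group_size)  # (N, g)
    w = h_diag.reshape(n_groups, group_size).repeat(out_f, 1)           # (N, g)

    # Initialise codewords from randomly sampled weight groups.
    C = x[torch.randperm(x.shape[0])[:codebook_size]].clone()           # (K, g)

    for _ in range(rounds):
        # E-step: assign each group to the codeword minimising the
        # output-aware error sum_d w_d * (x_d - c_d)^2, expanded so that
        # no (N, K, g) intermediate tensor is materialised.
        wx = w * x
        dist = (wx * x).sum(1, keepdim=True) - 2 * wx @ C.T + w @ (C**2).T
        assign = dist.argmin(dim=1)

        # M-step: weighted least-squares update of every codeword.
        for k in range(codebook_size):
            mask = assign == k
            if mask.any():
                C[k] = (w[mask] * x[mask]).sum(0) / w[mask].sum(0).clamp_min(1e-8)
    return C, assign
```

With more than one codebook, the E-step becomes a joint search over codeword combinations, which is why the configuration below uses beam search (beam size 4).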

## Quantization Details

| Parameter | Value |
|---|---|
| Method | AQLM + OA-EM init + PV-tuning |
| Bitrate | 2 bits per parameter (2 codebooks × 8 bits, group size 8) |
| Codebook config | `num_codebooks=2`, `nbits_per_codebook=8`, `in_group_size=8` |
| Beam size | 4 |
| OA-EM config | 3 rounds, 100 Adam steps, lr=1e-4 |
| PV-tuning | 5 epochs, LAMB optimizer, lr=3e-4, 10K C4 samples |
| Seed | 42 |
| Context length | 4096 tokens |
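
The 2-bit figure follows directly from this configuration: each group of `in_group_size=8` weights is encoded by `num_codebooks=2` code indices of `nbits_per_codebook=8` bits each (ignoring the small fixed cost of storing the codebooks themselves):

```python
num_codebooks, nbits_per_codebook, in_group_size = 2, 8, 8
bits_per_param = num_codebooks * nbits_per_codebook / in_group_size
print(bits_per_param)  # 2.0 bits per parameter
```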

## Results

### Perplexity (WikiText-2)

| Model | Pre-PV | Post-PV |
|---|---|---|
| FP16 (unquantized) | 7.28 | – |
| AQLM Greedy (b=4) | 352.39 | 12.66 |
| AQLM OA-EM (b=4) | 16.82 | 11.53 |

### Downstream Tasks (post-PV-tuning, b=4)

| Task | Metric | AQLM Greedy | AQLM OA-EM |
|---|---|---|---|
| ARC-Challenge | acc_norm ↑ | .350 | .359 |
| ARC-Easy | acc_norm ↑ | .560 | .614 |
| HellaSwag | acc_norm ↑ | .620 | .625 |
| LAMBADA | acc ↑ | .577 | .584 |
| LAMBADA | ppl ↓ | 7.65 | 7.13 |
| PIQA | acc_norm ↑ | .734 | .734 |
| WinoGrande | acc ↑ | .594 | .619 |
| **Average** (accuracy metrics) | | .573 | .589 |
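
The card does not state which harness produced these numbers. Assuming lm-evaluation-harness (with `lambada_openai` standing in for the unspecified LAMBADA variant), a run along the following lines should reproduce the accuracy columns; treat the task names and arguments as a sketch rather than the authors' exact setup.

```python
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",
    model_args="pretrained=kennedyian94/Llama-3.2-3B-AQLM-OA-EM-2Bit-2x8,"
               "trust_remote_code=True,dtype=auto",
    tasks=["arc_challenge", "arc_easy", "hellaswag",
           "lambada_openai", "piqa", "winogrande"],
    batch_size=8,
)
print(results["results"])
```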

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True lets the custom AQLM quantized layers load;
# device_map="auto" places the model on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(
    "kennedyian94/Llama-3.2-3B-AQLM-OA-EM-2Bit-2x8",
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("kennedyian94/Llama-3.2-3B-AQLM-OA-EM-2Bit-2x8")

inputs = tokenizer("The meaning of life is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note: requires the `aqlm` inference library:

```bash
pip install aqlm[gpu]
```

## Paper

*Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization*

Ian W. Kennedy and Nafise Sadat Moosavi, University of Sheffield

arXiv:2604.08118 | Code

## Citation

```bibtex
@article{kennedy2026oaem,
  title={Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization},
  author={Kennedy, Ian W. and Moosavi, Nafise Sadat},
  journal={arXiv preprint arXiv:2604.08118},
  year={2026}
}
```