# Qwen-2.5-3B-AQLM-OA-EM-2Bit-2x8

This is Qwen 2.5 3B quantized to 2 bits per parameter using AQLM with OA-EM initialisation and PV-tuning.

OA-EM (Output-Aware Expectation-Maximisation) is a Hessian-weighted EM algorithm that substantially improves codebook initialisation for additive quantization, particularly at extreme compression rates. See the paper linked below for details.

## Quantization Details

| Parameter | Value |
|---|---|
| Method | AQLM + OA-EM init + PV-tuning |
| Bitrate | 2 bpp (2 codebooks × 8 bits, group size 8) |
| Codebook config | `num_codebooks=2`, `nbits_per_codebook=8`, `in_group_size=8` |
| Beam size | 8 |
| OA-EM config | 3 rounds, 100 Adam steps, lr=1e-4 |
| PV-tuning | 5 epochs, LAMB optimizer, lr=3e-4, 10K C4 samples |
| Seed | 42 |
| Context length | 4096 tokens |
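
As a quick sanity check on the bitrate above, the per-weight cost follows directly from the codebook config: each group of `in_group_size` weights is encoded by `num_codebooks` codes of `nbits_per_codebook` bits each (a minimal sketch; codebook storage is amortized over the whole model and ignored here):

```python
# Effective AQLM bitrate: bits spent on codes per group of weights,
# divided by the number of weights in that group.
num_codebooks = 2
nbits_per_codebook = 8
in_group_size = 8

bits_per_group = num_codebooks * nbits_per_codebook  # 16 bits per group of 8 weights
bits_per_weight = bits_per_group / in_group_size     # 16 / 8 = 2.0 bpp

print(bits_per_weight)  # 2.0
```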

## Results

### Perplexity

| Model | WikiText-2 (pre-PV) | C4 (pre-PV) | WikiText-2 (post-PV) | C4 (post-PV) |
|---|---|---|---|---|
| AQLM Greedy | 12.50 | 16.01 | 10.93 | 14.57 |
| AQLM OA-EM | 12.30 | 16.08 | 10.73 | 14.49 |
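
Perplexity here is, as usual, the exponential of the mean per-token negative log-likelihood on the held-out set. A minimal helper illustrating the relationship (the example NLL value is chosen to match the post-PV WikiText-2 score above, not taken from an actual run):

```python
import math

def perplexity(total_nll: float, n_tokens: int) -> float:
    """Perplexity = exp(mean per-token negative log-likelihood, in nats)."""
    return math.exp(total_nll / n_tokens)

# A mean NLL of ~2.373 nats/token corresponds to ppl ~10.73
# (the post-PV WikiText-2 score of the OA-EM model):
print(round(perplexity(2373.0, 1000), 2))
```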

### Downstream Tasks (post-PV-tuning, beam size b=8)

| Task | Metric | AQLM Greedy | AQLM OA-EM |
|---|---|---|---|
| ARC-Challenge | acc_norm ↑ | .375 | .366 |
| ARC-Easy | acc_norm ↑ | .662 | .651 |
| HellaSwag | acc_norm ↑ | .626 | .630 |
| LAMBADA | acc ↑ | .600 | .587 |
| LAMBADA | ppl ↓ | 7.29 | 7.37 |
| PIQA | acc_norm ↑ | .739 | .737 |
| WinoGrande | acc ↑ | .634 | .647 |
| Average | | .606 | .603 |
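
The Average row matches the mean of the six accuracy-style metrics; LAMBADA perplexity (lower is better) is excluded from the average. A quick check:

```python
# Per-task accuracy columns from the table above (LAMBADA ppl excluded).
greedy = [0.375, 0.662, 0.626, 0.600, 0.739, 0.634]
oa_em  = [0.366, 0.651, 0.630, 0.587, 0.737, 0.647]

print(round(sum(greedy) / len(greedy), 3))  # 0.606
print(round(sum(oa_em) / len(oa_em), 3))    # 0.603
```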

Note: On Qwen, the baseline holds a small downstream advantage (0.606 vs 0.603 average accuracy), consistent with the mild initialisation bottleneck on this architecture. OA-EM wins on perplexity (both WikiText-2 and C4 after PV-tuning).

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "kennedyian94/Qwen-2.5-3B-AQLM-OA-EM-2Bit-2x8",
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("kennedyian94/Qwen-2.5-3B-AQLM-OA-EM-2Bit-2x8")

inputs = tokenizer("The meaning of life is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note: Requires the `aqlm` inference library:

```bash
pip install aqlm[gpu]
```

## Paper

Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization

Ian W. Kennedy and Nafise Sadat Moosavi, University of Sheffield

arXiv:2604.08118 | Code

## Citation

```bibtex
@article{kennedy2026oaem,
  title={Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization},
  author={Kennedy, Ian W. and Moosavi, Nafise Sadat},
  journal={arXiv preprint arXiv:2604.08118},
  year={2026}
}
```