# gemma-2-9b-it: Wanda pruning at 50% target sparsity
> ⚠️ **Research artifact only; not for production use.** This model was created to study fairness degradation under weight pruning. The companion paper (IEEE AIIoT 2026) demonstrates that Wanda pruning at this sparsity level induces measurable bias amplification on the BBQ benchmark. Do not deploy this model in any user-facing or decision-making system.
## Paper
*Weight Pruning Amplifies Bias: A Multi-Method Study of Compressed LLMs for Edge AI.* Plawan Kumar Rath, Rahul Maliakkal. IEEE AIIoT 2026.
- arXiv: https://arxiv.org/abs/2605.08137
- Code: https://github.com/plawanrath/pruning-impact-analysis
- Base model: google/gemma-2-9b-it
- License: gemma (inherited from the base model; see its terms)
## Pruning configuration
- Method: wanda
- Target sparsity: 50%
- Actual sparsity achieved: 33.33%
- Zeroed parameters: 2,774,532,096 of 8,323,596,288 prunable (33.33%)
- Prune wall time: 335.9s
- Pruning scope: linear layers in transformer blocks (attention projections + MLP). Embeddings, LM head, and layer norms are untouched.
- Calibration set (Wanda only): 128 samples from C4, sequence length 2048.
**Method description.** Activation-aware unstructured pruning (Wanda, ICLR 2024). The importance score of weight W_ij is `|W_ij| * ||X_j||_2`, where `||X_j||_2` is the L2 norm of the j-th input feature over the 128 C4 calibration samples at sequence length 2048. The paper reports Wanda as the most dangerous method from a fairness standpoint despite preserving perplexity best.
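For concreteness, here is a minimal sketch of that scoring rule applied to a single linear layer. This is not the repo's implementation; the function name, the per-output-row ranking, and the assumption that calibration activations have already been collected (e.g. via forward hooks) are all illustrative choices based on the Wanda paper.

```python
import torch

def wanda_prune_linear(weight: torch.Tensor, calib_inputs: torch.Tensor,
                       sparsity: float = 0.5) -> torch.Tensor:
    """Zero the lowest-importance weights of one linear layer.

    weight:       (out_features, in_features) dense weight matrix
    calib_inputs: (num_tokens, in_features) activations this layer saw on the
                  calibration set (here: 128 C4 samples at seq length 2048)
    """
    # Per-input-feature L2 norm over calibration tokens: ||X_j||_2
    feat_norm = calib_inputs.norm(p=2, dim=0)          # (in_features,)
    # Wanda importance score: |W_ij| * ||X_j||_2
    score = weight.abs() * feat_norm.unsqueeze(0)      # (out, in)
    # Within each output row, zero the lowest-scored `sparsity` fraction
    k = int(weight.shape[1] * sparsity)
    _, idx = torch.topk(score, k, dim=1, largest=False)
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, idx, False)
    return weight * mask
```

Ranking within each output row (rather than globally) follows the per-output comparison group described in the Wanda paper; the repo linked above contains the actual pipeline used for this checkpoint.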
## Reported metrics (from the paper)
| Metric | Value | Baseline / notes |
|---|---|---|
| Perplexity (Tulu-3 SFT mix, 256 seqs × 512 tokens) | 13.10 | dense baseline 8.94 (+46.5%) |
| SRS (overall) | 0.065 | dense 0.065 (+0.0%) |
| SRS by category (s50) | Age: 0.193, Gender Identity: 0.016, Race/Ethnicity: 0.014, Religion: 0.088, SES: 0.015 | random-chance baseline ≈ 0.333 |
| New-bias-emergence rate | 1.30% | % of items with per-item SRS = 0 at dense that develop SRS > 0 after pruning, across 5 seeds (Table III in the paper; see the sketch below this table) |
| Mean per-item inference latency (Apple Silicon, MLX) | 0.455s | identical to the dense baseline — unstructured pruning provides no latency benefit on dense GEMM kernels (paper §V.B) |
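Given the definition in the new-bias-emergence row above, the rate can be computed from per-item SRS arrays roughly as follows. The `(num_seeds, num_items)` array layout and the pooled averaging across seeds are assumptions; the paper's exact aggregation lives in the linked repo.

```python
import numpy as np

def new_bias_emergence_rate(dense_srs: np.ndarray, pruned_srs: np.ndarray) -> float:
    """Percentage of items that are bias-free (SRS == 0) under the dense
    model but develop SRS > 0 after pruning.

    dense_srs, pruned_srs: (num_seeds, num_items) per-item SRS values.
    """
    # An item counts as newly biased if it scored 0 at dense and > 0 pruned
    newly_biased = (dense_srs == 0) & (pruned_srs > 0)
    return float(newly_biased.mean()) * 100.0  # pooled over seeds and items
```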
## Important caveats for IoT / edge deployment
- **No storage savings.** Unstructured pruning zeroes individual weights but keeps them in the dense float tensor. SafeTensors and GGUF do not exploit unstructured sparsity, so the on-disk size of this checkpoint is identical to the dense base model (see the sketch after this list).
- **No latency savings.** Dense GEMM kernels do not skip zero entries. Inference latency on Apple Silicon (MLX) and most consumer GPUs / mobile NPUs is identical to the dense baseline (also demonstrated in the sketch after this list).
- **Bias amplification may be invisible to perplexity-based evaluation.** The paper's headline finding (the Smart Pruning Paradox): Wanda at 50% sparsity on Mistral-7B raises perplexity by only 3.5% but raises the Stereotype Reliance Score by 83.7%, a 24× disparity. Standard deployment validation based on perplexity alone therefore provides false assurance.
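The first two caveats are easy to verify locally. The sketch below (the 4096×4096 layer size and the file names are placeholders, not from the repo) shows that a half-zeroed dense tensor serializes to the same size and multiplies in the same time as its fully dense counterpart.

```python
import os
import time

import torch
from safetensors.torch import save_file

# Stand-in for one pruned projection matrix (placeholder size)
dense = torch.randn(4096, 4096)
pruned = dense.clone()
pruned[torch.rand_like(pruned) < 0.5] = 0.0  # ~50% unstructured zeros

# 1) Storage: safetensors serializes every element, zero or not
save_file({"w": dense}, "dense.safetensors")
save_file({"w": pruned}, "pruned.safetensors")
print(os.path.getsize("dense.safetensors"),
      os.path.getsize("pruned.safetensors"))   # -> identical sizes

# 2) Latency: a dense GEMM kernel multiplies the zeros anyway
x = torch.randn(512, 4096)
for w in (dense, pruned):
    t0 = time.perf_counter()
    for _ in range(50):
        x @ w.T
    print(time.perf_counter() - t0)            # -> near-identical timings
```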
## Citation
```bibtex
@inproceedings{rath2026pruning,
  title         = {Weight Pruning Amplifies Bias: A Multi-Method Study of Compressed LLMs for Edge AI},
  author        = {Rath, Plawan Kumar and Maliakkal, Rahul},
  booktitle     = {Proc. IEEE AIIoT 2026},
  year          = {2026},
  eprint        = {2605.08137},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  url           = {https://arxiv.org/abs/2605.08137}
}
```
## Reproducibility
- All pruning scripts, evaluation pipelines, and aggregated results: https://github.com/plawanrath/pruning-impact-analysis
- BBQ benchmark (ambiguous condition only): Elfsong/BBQ
- Pruning stats (actual_sparsity, prune wall time, etc.) are generated from the `pruning_meta.json` shipped in this repo.
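As a further sanity check, the reported actual sparsity can be re-measured from the checkpoint itself. A minimal sketch, assuming the checkpoint loads through `transformers` and that the prunable scope is the `nn.Linear` modules inside the transformer blocks (`model.layers.*`, per the scope stated above); the repo id is a placeholder to replace with this model's actual id:

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder id; loading a 9B model needs sufficient RAM
model = AutoModelForCausalLM.from_pretrained(
    "<this-repo-id>", torch_dtype=torch.bfloat16
)

zeros, total = 0, 0
for name, module in model.named_modules():
    # Prunable scope per this card: linear layers in transformer blocks only
    # (embeddings, LM head, and layer norms are excluded)
    if isinstance(module, torch.nn.Linear) and "model.layers." in name:
        zeros += (module.weight == 0).sum().item()
        total += module.weight.numel()

print(f"actual sparsity: {zeros / total:.4f}")  # card reports 0.3333
```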