---
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
library_name: transformers
language:
  - en
tags:
  - pruning
  - random
  - bias-evaluation
  - llm-compression
  - arxiv:2605.08137
  - research-only
---

# Phi-3.5-mini-instruct — random pruning at 10% target sparsity

> **⚠️ Research artifact only — not for production use.** This model was created to study fairness degradation under weight pruning. The companion paper (IEEE AIIoT 2026) demonstrates that random pruning at this sparsity level induces measurable bias amplification on the BBQ benchmark. Do not deploy this model in any user-facing or decision-making system.

## Paper

**Weight Pruning Amplifies Bias: A Multi-Method Study of Compressed LLMs for Edge AI.** Plawan Kumar Rath and Rahul Maliakkal. IEEE AIIoT 2026. [arXiv:2605.08137](https://arxiv.org/abs/2605.08137).

## Pruning configuration

- **Method:** random
- **Target sparsity:** 10%
- **Actual sparsity achieved:** 10.00%
- **Zeroed parameters:** 362,393,298 of 3,623,878,656 prunable (10.00%)
- **Prune wall time:** 0.4 s
- **Pruning scope:** linear layers in transformer blocks (attention projections + MLP). Embeddings, LM head, and layer norms are untouched.
- **Calibration set (Wanda only; not used by this random-pruning checkpoint):** 128 samples from C4, sequence length 2048.

**Method description.** Uniform random unstructured pruning: each prunable weight is zeroed with equal probability, independent of magnitude or activation statistics. This acts as a control to test whether observed effects come from the selection criterion or from sparsity itself.
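A minimal sketch of this setup, assuming PyTorch's built-in `torch.nn.utils.prune` utilities and the Phi-3 module layout (decoder blocks under `model.layers.*`). The module filter and the 10% `amount` mirror the configuration listed above, but this is not the paper's actual harness:

```python
# Minimal sketch: uniform random unstructured pruning of the linear layers
# in the transformer blocks. Embeddings, the LM head, and layer norms are
# skipped, matching the pruning scope described above.
import torch.nn as nn
import torch.nn.utils.prune as prune
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct")

for name, module in model.named_modules():
    # Assumption: decoder-block linears live under the "model.layers." prefix.
    if isinstance(module, nn.Linear) and name.startswith("model.layers."):
        prune.random_unstructured(module, name="weight", amount=0.10)
        prune.remove(module, "weight")  # bake the zeros into the dense tensor

# Sanity check: fraction of zeroed weights across the pruned layers.
zeroed = total = 0
for name, module in model.named_modules():
    if isinstance(module, nn.Linear) and name.startswith("model.layers."):
        zeroed += int((module.weight == 0).sum())
        total += module.weight.numel()
print(f"actual sparsity: {zeroed / total:.4f}")
```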

## Reported metrics (from the paper)

| Metric | Value | Reference |
| --- | --- | --- |
| Mean per-item inference latency (Apple Silicon, MLX) | 0.158 s | Identical to the dense baseline; unstructured pruning provides no latency benefit on dense GEMM kernels (paper §V.B) |
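The paper measures latency with an MLX harness on Apple Silicon, which is not reproduced here. A generic PyTorch sketch of the same style of measurement follows; the prompts, token budget, and device handling are illustrative placeholders, not the paper's protocol:

```python
# Hedged sketch: mean per-item generation latency over a small prompt set.
# All measurement parameters below are placeholders for illustration.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3.5-mini-instruct", torch_dtype=torch.float16, device_map="auto"
)

prompts = ["The capital of France is", "Water boils at"]  # placeholder items
latencies = []
for p in prompts:
    inputs = tok(p, return_tensors="pt").to(model.device)
    start = time.perf_counter()
    with torch.no_grad():
        model.generate(**inputs, max_new_tokens=32, do_sample=False)
    latencies.append(time.perf_counter() - start)

print(f"mean per-item latency: {sum(latencies) / len(latencies):.3f}s")
```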

## Important caveats for IoT / edge deployment

- **No storage savings.** Unstructured pruning zeroes individual weights but keeps them in the dense float tensor. SafeTensors and GGUF do not exploit unstructured sparsity, so the on-disk size of this checkpoint is identical to the dense base model (see the verification sketch after this list).
- **No latency savings.** Dense GEMM kernels do not skip zero entries. Inference latency on Apple Silicon (MLX) and on most consumer GPUs and mobile NPUs is identical to the dense baseline.
- **Bias amplification may be invisible to perplexity-based evaluation.** The paper's headline finding, the "Smart Pruning Paradox": Wanda at 50% sparsity on Mistral-7B raises perplexity by 3.5% but raises the Stereotype Reliance Score by 83.7%, a roughly 24× disparity. Deployment validation based on perplexity alone therefore provides false assurance.
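The storage caveat is straightforward to check directly on a saved checkpoint. A sketch using the `safetensors` API; the shard filename is a placeholder:

```python
# Sketch: verify that unstructured pruning leaves on-disk size unchanged
# while zeroing ~10% of the 2-D layer weights. The path is a placeholder.
import os
from safetensors import safe_open

path = "model-00001-of-00002.safetensors"  # placeholder shard name
print(f"on-disk size: {os.path.getsize(path) / 1e9:.2f} GB")  # same as dense

zeroed = total = 0
with safe_open(path, framework="pt") as f:
    for key in f.keys():
        t = f.get_tensor(key)
        # Count only 2-D weights inside decoder blocks, skipping
        # embeddings, the LM head, and 1-D layer-norm parameters.
        if "layers" in key and key.endswith("weight") and t.ndim == 2:
            zeroed += int((t == 0).sum())
            total += t.numel()
print(f"sparsity over 2-D layer weights: {zeroed / total:.4f}")
```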

## Citation

```bibtex
@inproceedings{rath2026pruning,
  title         = {Weight Pruning Amplifies Bias: A Multi-Method Study of Compressed LLMs for Edge AI},
  author        = {Rath, Plawan Kumar and Maliakkal, Rahul},
  booktitle     = {Proc. IEEE AIIoT 2026},
  year          = {2026},
  eprint        = {2605.08137},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  url           = {https://arxiv.org/abs/2605.08137}
}
```

## Reproducibility