- Attention Sink in Transformers: A Survey on Utilization, Interpretation, and Mitigation — arXiv:2604.10098, published 3 days ago
- Locate, Steer, and Improve: A Practical Survey of Actionable Mechanistic Interpretability in Large Language Models — arXiv:2601.14004, published Jan 20
- RotateKV: Accurate and Robust 2-Bit KV Cache Quantization for LLMs via Outlier-Aware Adaptive Rotations — arXiv:2501.16383, published Jan 25, 2025
- AKVQ-VL: Attention-Aware KV Cache Adaptive 2-Bit Quantization for Vision-Language Models — arXiv:2501.15021, published Jan 25, 2025
- Unveiling Super Experts in Mixture-of-Experts Large Language Models — arXiv:2507.23279, published Jul 31, 2025