What Matters in Transformers? Not All Attention is Needed • Paper • arXiv 2406.15786 • Published Jun 22, 2024
Making Large Language Models Efficient Dense Retrievers • Paper • arXiv 2512.20612 • Published Dec 23, 2025
LLM-Drop • Collection • Model weights from the paper "Uncovering the Redundancy in Transformers via a Unified Study of Layer Dropping" (TMLR) • 18 items
Understanding and Harnessing Sparsity in Unified Multimodal Models • Paper • arXiv 2512.02351 • Published Dec 2, 2025
Demystifying When Pruning Works via Representation Hierarchies • Paper • arXiv 2603.24652