Abstract
Interleaved Head Attention addresses multi-head attention's limitations by enabling cross-head communication through pseudo-heads, improving reasoning performance on benchmark tasks.
Multi-Head Attention (MHA) is the core computational primitive underlying modern Large Language Models (LLMs). However, MHA suffers from a fundamental linear-scaling limitation: H attention heads produce exactly H independent attention matrices, with no communication between heads during attention computation. This becomes problematic for multi-step reasoning, where correct answers depend on aggregating evidence from multiple parts of the context and on composing latent token-to-token relations over a chain of intermediate inferences. To address this, we propose Interleaved Head Attention (IHA), which enables cross-head mixing by constructing P pseudo-heads per head (typically P = H), where each pseudo query/key/value is a learned linear combination of all H original queries, keys, and values, respectively. Interactions between pseudo-query and pseudo-key heads induce up to P^2 attention patterns per head at a modest parameter overhead of O(H^2 P). We provide theory showing improved parameter efficiency on the synthetic Polynomial task (IHA uses Θ(kn^2) parameters vs. Θ(kn^2) for MHA) and on the synthetic order-sensitive CPM-3 task (IHA uses ⌈N_{max}⌉ heads vs. N_{max} for MHA). On real-world benchmarks, IHA improves Multi-Key retrieval on RULER by 10–20% at context lengths from 4k to 16k and, after fine-tuning for reasoning on OpenThoughts, improves GSM8K by 5.8% and MATH-500 by 2.8% (Majority Vote) over full attention.
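The pseudo-head construction described above can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name `iha_sketch`, the mixing-weight shapes, and the final averaging over the P^2 pattern pairs are all assumptions made here for concreteness.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def iha_sketch(Q, K, V, Wq, Wk, Wv):
    """Sketch of Interleaved Head Attention (assumptions noted below).

    Q, K, V : (H, T, d) per-head query/key/value projections.
    Wq, Wk, Wv : (H, P, H) learned mixing weights; pseudo-head p of head h
        is a linear combination of all H original heads, as in the abstract.
    Returns (H, T, d) per-head outputs. Averaging the P^2 pattern pairs
    at the end is this sketch's choice, not the paper's stated rule.
    """
    H, T, d = Q.shape
    # Pseudo queries/keys/values: (H, P, T, d), each a mix over all H heads.
    pQ = np.einsum('hpj,jtd->hptd', Wq, Q)
    pK = np.einsum('hpj,jtd->hptd', Wk, K)
    pV = np.einsum('hpj,jtd->hptd', Wv, V)
    # All pseudo-query x pseudo-key interactions: (H, P, P, T, T),
    # i.e. up to P^2 attention patterns per original head.
    scores = np.einsum('hptd,hqsd->hpqts', pQ, pK) / np.sqrt(d)
    attn = softmax(scores, axis=-1)
    # Apply each pattern to the matching pseudo-value head, then average
    # over the P^2 (pseudo-query, pseudo-key) pairs.
    out = np.einsum('hpqts,hqsd->hpqtd', attn, pV).mean(axis=(1, 2))
    return out
```

The three mixing tensors contribute 3·H·P·H weights, matching the O(H^2 P) parameter-overhead claim. With P = 1 and one-hot mixing weights (each pseudo-head selecting its own head), the sketch reduces to standard per-head attention.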
Paper: 2602.21371