---
license: cc-by-4.0
task_categories:
- feature-extraction
language:
- en
tags:
- transformer
- attention
- rope
- power-law
- scaling-laws
- interpretability
- llm
- benchmark
pretty_name: TAF Attention-Decay Measurements
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: taf-attention-decay.jsonl
---
# TAF Attention-Decay Measurements
First public dataset of attention-decay exponent γ measurements across transformer LLMs. Companion to the Thermodynamic Attention Framework (TAF) papers by Carles Marín (2026):
- Paper I: 10.5281/zenodo.19826343 — Predicting How Transformers Attend
- Paper II: 10.5281/zenodo.19960573 — A Six-Axis Decomposition with the Learned Imprint, Sink-Dominated Precision Boundaries, Bimodal Phase Structure, and Honest Revisions
## What it is
Each record is one γ measurement on one (model, corpus, precision) tuple. γ is the exponent of the power-law decay of attention weights at distance d:

$$A(d) \propto d^{-\gamma}$$

predicted from RoPE geometry by the closed-form Padé formula

$$\gamma_{\text{Padé}} = \frac{2\theta - T\sqrt{2}}{2\theta + T\sqrt{2}}$$

where θ is the RoPE base frequency and T is the evaluation context length.
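As a quick numeric check, here is the formula in plain Python (the helper name `gamma_pade` is ours, not part of the released tooling):

```python
import math

def gamma_pade(theta: float, T: int) -> float:
    """Closed-form Padé prediction for the attention-decay exponent γ."""
    return (2 * theta - T * math.sqrt(2)) / (2 * theta + T * math.sqrt(2))

# RoPE base and evaluation context length from the schema example below:
print(gamma_pade(theta=10_000, T=2048))  # ≈ 0.747
```

With θ = 10000 and T = 2048 this reproduces the `gamma_pade` value of 0.747 that appears in the schema example below.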
## Coverage
- 35 models across 13 families (Pythia, Qwen, Llama, Mistral, Gemma, Phi, OLMo, OLMoE, DeepSeek, StarCoder2, CodeLlama, GPT-J, SmolLM2, Falcon, Yi)
- 88 records total
- 2 corpora: real text (`real_text`, MongoDB English episodes) + random tokens (`random_tokens`)
- 2 precisions: 4-bit NF4 (BitsAndBytes) + bfloat16
- Includes random-init controls (E2 falsifier on Pythia 70M/410M/1B with random Gaussian init, no pretraining) — establishes that the slope ν = ∂γ/∂log₁₀(P) ≈ −1/(2π) is genuinely a training imprint, not an architecture artifact (a slope-fit sketch follows this list).
- Pythia-70M training trajectory (9 checkpoints × 2 corpora = 18 records, session 32) — within-model γ across step1000→step143000. Honest null result: the trajectory does NOT converge to ν = −1/(2π); the imprint constant emerges across models, not within a model.
- Pythia-31M high-n robustness (n=60 prompts × 2 corpora = 2 records) — tightens the CI on the smallest Pythia anchor.
- Yi-9B random_tokens (n=30) — fills 9B class gap in family panel.
- R²-direction rule extension (session 32 v2, 2026-05-02): 6 new bf16/4-bit paired measurements (Pythia-410M, Pythia-1.4B, StarCoder2-3B, Mistral-7B base + Instruct, Qwen2.5-7B base). Brings the R²-direction rule panel from $n=5$ to $n=8$ paired (7/8 sign-correct; StarCoder2-3B is the new outlier).
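The slope ν above is an ordinary least-squares fit of γ against log₁₀ of parameter count across models. A minimal sketch with placeholder values (not dataset records):

```python
import numpy as np

# Placeholder (parameter count in millions, γ) pairs across a model family;
# illustrative values only, not records from this dataset.
params_M = np.array([70, 410, 1000, 2800])
gamma = np.array([0.80, 0.68, 0.62, 0.55])

# ν is the slope of γ against log10(P): ν = ∂γ/∂log₁₀(P)
nu, intercept = np.polyfit(np.log10(params_M), gamma, deg=1)
print(nu, -1 / (2 * np.pi))  # compare the fitted slope with −1/(2π) ≈ −0.1592
```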
## Schema
Each JSONL row:

```json
{
  "model_id": "EleutherAI/pythia-2.8b",
  "revision": "main",
  "arch": {
    "d_model": 2560, "n_heads": 32, "n_layers": 32, "d_head": 80,
    "n_kv_heads": 32, "n_params_M": 2800, "rope_theta": 10000,
    "T_train": 2048, "family": "pythia",
    "is_instruct": false, "is_moe": false
  },
  "measurement": {
    "gamma": 0.674,
    "gamma_ci95_lo": 0.65, "gamma_ci95_hi": 0.70,
    "method": "pade_d_alias_T",
    "fit": {"log_A": -3.21, "R2": 0.987, "n_points": 9, "delta_R2_power_minus_exp": 0.42},
    "T_eval": 2048,
    "corpus": "real_text",
    "n_prompts_per_distance": 150,
    "seeds": [42, 123, 7],
    "distances": [10, 20, 30, 50, 100, 200, 500, 1000, 2000],
    "precision": "4-bit-NF4"
  },
  "predictions": {
    "gamma_pade": 0.747,
    "gamma_random_pred": null,
    "imprint_constant_nu": -0.1592
  },
  "decision": "MED gamma=0.674 (R²=0.987)",
  "provenance": {
    "taf_version": "0.4",
    "paper_doi": "10.5281/zenodo.19826343",
    "source_file": "EleutherAI--pythia-2.8b_mongo.json",
    "tool": "tafagent/cli/diagnose_model.py + e4_extended_gamma.py",
    "license_data": "CC-BY-4.0",
    "license_code": "Apache-2.0"
  }
}
```
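The `fit` block records a least-squares fit in log-log space. A minimal sketch of that estimator on synthetic values (not dataset measurements):

```python
import numpy as np

# Synthetic power-law curve with γ = 0.674 and intercept log(0.04) ≈ −3.21,
# mirroring the "fit" block above.
d = np.array([10, 20, 30, 50, 100, 200, 500, 1000, 2000])
A = 0.04 * d ** -0.674

# Regressing log A on log d gives slope −γ and intercept log_A.
slope, log_A = np.polyfit(np.log(d), np.log(A), deg=1)
print(-slope, log_A)  # ≈ 0.674, ≈ −3.21
```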
## Usage
```python
from datasets import load_dataset

ds = load_dataset("karlexmarin/taf-attention-decay")
print(ds["train"][0])
```
```python
import pandas as pd

df = pd.read_json("taf-attention-decay.jsonl", lines=True)

# Filter to real-text measurements; .copy() avoids SettingWithCopyWarning.
df_text = df[df["measurement"].apply(lambda m: m["corpus"] == "real_text")].copy()
df_text["gamma"] = df_text["measurement"].apply(lambda m: m["gamma"])

# "arch" holds nested dicts (unhashable), so group on the extracted family name.
df_text["family"] = df_text["arch"].apply(lambda a: a["family"])
print(df_text.groupby("family")["gamma"].describe())
```
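A further derived quantity is the gap between the RoPE-geometry prediction and the measured exponent (the column name `gamma_gap` is ours, not part of the schema):

```python
# Prediction-minus-measurement gap per record.
df["gamma_gap"] = df.apply(
    lambda r: r["predictions"]["gamma_pade"] - r["measurement"]["gamma"], axis=1
)
print(df["gamma_gap"].describe())
```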
## Why this dataset exists
The attention-decay exponent γ is a single-number diagnostic of how "locally" or "globally" a transformer attends. It connects RoPE geometry to long-context behavior, KV-cache compression, NIAH retrieval, and hallucination rates — see the companion paper for details.
Until now, no public dataset of γ measurements existed across LLMs. This release closes that gap.
## What's NOT in this dataset
- Raw attention tensors (TB-scale, redundant with model weights)
- Per-layer per-head γ-fields (separate dataset planned)
- Training-trajectory γ over checkpoints (the Pythia-70M trajectory is now INCLUDED as of session 32; a broader panel is still planned)
- Downstream task scores (use RULER, LongBench-v2, HELM separately)
## License
- Data (this dataset): CC-BY-4.0
- Measurement code: Apache-2.0 (github.com/karlesmarin/tafagent)
- Underlying model weights: respective HuggingFace licenses (consult each model's card)
## Citation

```bibtex
@dataset{marin2026taf_attention_decay,
  author    = {Mar{\'\i}n, Carles},
  title     = {TAF Attention-Decay Measurements},
  year      = {2026},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/datasets/karlexmarin/taf-attention-decay},
  license   = {CC-BY-4.0}
}

@article{marin2026predicting,
  author = {Mar{\'\i}n, Carles},
  title  = {Predicting How Transformers Attend: Analytic Power-Law Theory, Phase Transitions, and Practical Compression Tools},
  year   = {2026},
  doi    = {10.5281/zenodo.19826343},
  url    = {https://zenodo.org/records/19826343}
}

@article{marin2026taf2,
  author    = {Mar{\'\i}n, Carles},
  title     = {Predicting How Transformers Attend, Part II: A Six-Axis Decomposition with the Learned Imprint $\nu = -1/(2\pi)$, Sink-Dominated Precision Boundaries, Bimodal Phase Structure, and Honest Revisions},
  year      = {2026},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.19960573},
  url       = {https://doi.org/10.5281/zenodo.19960573}
}
```
## Acknowledgements
This dataset would not exist without:
- EleutherAI for the Pythia panel (8 sizes from 14M to 2.8B), the primary scientific anchor of the framework.
- AI2 for OLMo / OLMoE.
- Meta, Mistral AI, Qwen team / Alibaba, Google DeepMind, Microsoft, HuggingFace SmolLM team, DeepSeek-AI, TII (Falcon), and BigScience (BLOOM) for releasing weights publicly.
- The HuggingFace Hub for free hosting that made the measurements possible.
## Reproducibility
The measurement protocol is fully open:
- Tool: github.com/karlesmarin/tafagent (`cli/diagnose_model.py`)
- Browser tool: karlesmarin.github.io/tafagent
Each row in this dataset can be reproduced from the original model weights via the open tool. If you find a discrepancy, please open an issue at the GitHub repo — refutations are welcome.
## Updates
- 2026-04-29: Initial release (58 records, 32 models, 2 corpora, 2 precisions)
- 2026-04-30: Added `analysis/games/game_O_results.json` + `game_P_results.json` (hyperscaling identities + recursive derivations)
- 2026-05-01: ★ Session 32 paper 2 strengthening — +21 records (79 total):
  - Pythia-70M ν trajectory (9 ckpts × 2 corpora = 18) — within-model null documented
  - Yi-9B random_tokens (1) — 9B class gap filled
  - Pythia-31M high-n robustness (2, n=60 each) — tightened CI on smallest anchor
- 2026-05-02: ★ Paper II released on Zenodo (DOI 10.5281/zenodo.19960573) + 9 new records (88 total):
  - 3 bf16/4-bit pairs (Pythia-410M, Pythia-1.4B, StarCoder2-3B) — R²-direction rule extension
  - Mistral-7B base + Instruct (4-bit) — F9 RLHF pair, finds Δγ_RLHF = −0.133
  - Qwen2.5-7B base (4-bit) — completes the Qwen GQA RLHF pair
- 2026-05-01: ★ TAF v0.5 machine-verified consistency — 15 algebraic identities of TAF critical exponents formally proven via a dual-tool approach:
  - Sage Groebner basis (algebraic decision in PolynomialRing(ℚ))
  - Lean Mathlib4 (dependent type theory, 1973/1973 jobs build success)
  - Including ★★ D-SAGE-1: the quadratic identity $2\eta^2 + \eta\gamma_\chi + 1 = 0$
  - Paper 1 erratum: $\eta = 2\gamma$ refuted algebraically; the correct form is $\eta = \gamma - 1$
  - First transformer-attention framework with formal machine-proof backing
- Future: training-trajectory data (Pythia checkpoint γ-flow), per-layer γ-fields, fp16 anchor measurements (DeepSeek-chat verification, Llama-3-8B cross-paper anchor)
## Machine-verification artifacts
For independent verification of TAF critical exponent identities:
```bash
# Sage verification (~30 s)
docker run --rm -v "$(pwd)/analysis:/work" sagemath/sagemath:latest \
  sage /work/sage_recursive_sweep_2026-04-30.sage

# Lean Mathlib4 verification (~10 min first time, then cached)
docker run --rm -v "$(pwd)/lean_taf:/work" \
  leanprovercommunity/lean:latest \
  -c "cd /work/taf && lake build"
```
- Sage results: `analysis/sage_recursive_sweep_results.json`
- Sage script: tafagent repo (when uploaded)
- Lean code: `lean_taf/taf/Taf/Identities.lean`
- Paper 2 appendix A.4: step-by-step proofs of all 15 identities
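For readers without Docker, the shape of the Groebner-basis check can be previewed in pure Python with sympy. The relation below is a placeholder standing in for the TAF axiom set (it is just the D-SAGE-1 polynomial itself), so this only illustrates the ideal-membership test, not the actual proof:

```python
from sympy import symbols, groebner

eta, gamma_chi = symbols("eta gamma_chi")

# Placeholder relation (NOT the TAF axioms).
relations = [2 * eta**2 + eta * gamma_chi + 1]
G = groebner(relations, eta, gamma_chi, order="lex")

# An identity holds in the quotient ring iff its reduction remainder is 0.
coeffs, remainder = G.reduce(2 * eta**2 + eta * gamma_chi + 1)
print(remainder)  # 0 => the identity lies in the ideal generated by `relations`
```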