# Attention-Labeled 16K Documents
Documents labeled by attention lookback distance at 16K context length, used for attention-based data filtering for long-context language model training.
## Overview
Each document was processed through Qwen2.5-Coder-7B-Instruct at 16,384 tokens, and the attention pattern at layer 27 was analyzed to measure how far back the model looks when processing the text. Documents where the model attends to distant tokens (high lookback) are labeled positive/filtered — these are hypothesized to be better training data for teaching long-context capabilities.
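The card does not spell out the exact lookback definition, so here is one plausible reading, sketched in NumPy: at each query position, the "median" lookback is the distance at which cumulative attention mass (counting nearest keys first) crosses 0.5, and the "max" lookback is the farthest key receiving non-negligible weight. The function name, the 0.5 crossing rule, and the `eps` cutoff are all assumptions, not the dataset's actual pipeline.

```python
import numpy as np

def lookback_stats(attn, eps=1e-4):
    """Lookback stats for one causal attention map.

    `attn` is (seq, seq); row q holds normalized weights over keys 0..q.
    Returns (avg_median_lookback, avg_max_lookback) over query positions.
    This is a hypothetical reading of the metric, not the exact
    definition used to label the dataset.
    """
    medians, maxes = [], []
    for q in range(attn.shape[0]):
        w = attn[q, : q + 1]
        dist = q - np.arange(q + 1)      # lookback distance to each key
        order = np.argsort(dist)         # nearest keys first
        cum = np.cumsum(w[order])
        # "median" lookback: distance where cumulative weight crosses 0.5
        medians.append(dist[order][np.searchsorted(cum, 0.5)])
        # "max" lookback: farthest key with non-negligible weight
        mask = w > eps
        maxes.append(dist[mask].max() if mask.any() else 0)
    return float(np.mean(medians)), float(np.mean(maxes))

# A map that always attends to token 0 has maximal lookback at each step:
attn = np.zeros((4, 4))
attn[:, 0] = 1.0
print(lookback_stats(attn))  # (1.5, 1.5): mean of distances 0, 1, 2, 3
```

Under this reading, a document scoring `sequence_avg_median_lookback` in the thousands (as in the filtered files) is one where, at a typical position, half the attention mass sits thousands of tokens back.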
## Files

| File | Rows | Description |
|---|---|---|
| `filtered_128.parquet` | 1,164 | Positive — high attention lookback (threshold ≥ 128 tokens), diverse sources |
| `filtered_300.parquet` | 168 | Positive — stricter threshold (≥ 300 tokens) |
| `negative_128.parquet` | 1,085 | Negative — low attention lookback (below the 128-token threshold) |
| `negative_300.parquet` | 168 | Negative — matched count for the 300-token threshold |
| `random_match128.parquet` | 1,164 | Random baseline — count-matched to `filtered_128` |
| `random_match300.parquet` | 168 | Random baseline — count-matched to `filtered_300` |
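The `random_match*` files are count-matched baselines. Assuming uniform sampling without replacement from the same document pool (the card does not state whether positives are excluded from the draw), the matching step amounts to:

```python
import random

random.seed(0)  # only so the sketch is reproducible

# Stand-ins for the real data: names and sizes here are illustrative.
pool = [f"doc_{i}" for i in range(5000)]   # full long-document pool
filtered = pool[:1164]                     # pretend positive split

# Count-matched random baseline: same number of docs, drawn uniformly
# from the pool without replacement (assumption).
random_match = random.sample(pool, k=len(filtered))
print(len(random_match))  # 1164
```

Count-matching keeps comparisons fair: any downstream difference between training on `filtered_128` and `random_match128` cannot be explained by dataset size alone.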
## Fields

| Field | Type | Description |
|---|---|---|
| `text` | string | Full document text (all files) |
| `source` | string | Data source domain (filtered/random files only) |
| `sequence_avg_median_lookback` | float | Median attention lookback distance, averaged across sequence positions (filtered/random files only) |
| `sequence_avg_max_lookback` | float | Max attention lookback distance, averaged across sequence positions (filtered/random files only) |
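Because the lookback scores ship with the rows, the threshold split can be re-derived (or re-thresholded at a different cutoff) locally. A minimal sketch over toy rows mimicking the schema; the numeric values are made up:

```python
# Toy rows following the dataset's schema; values are illustrative only.
rows = [
    {"text": "doc A", "source": "arxiv", "sequence_avg_median_lookback": 2602.9},
    {"text": "doc B", "source": "web",   "sequence_avg_median_lookback": 95.3},
    {"text": "doc C", "source": "code",  "sequence_avg_median_lookback": 130.0},
]

THRESHOLD = 128  # the filtered_128 / negative_128 cutoff

positive = [r for r in rows if r["sequence_avg_median_lookback"] >= THRESHOLD]
negative = [r for r in rows if r["sequence_avg_median_lookback"] < THRESHOLD]
print(len(positive), len(negative))  # 2 1
```

Note that in the released files only the filtered/random splits carry the lookback fields, so re-thresholding this way is possible on those files but not on the negative ones.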
## Source Domains

Documents were drawn from a pool of long-form documents (≥ 16K tokens) from multiple domains:

| Domain | Description |
|---|---|
| `code` | Source code from The Stack |
| `arxiv` | Academic papers |
| `web` | FineWeb web text |
| `legal` | Legal documents (MultiLexSum) |
| `books` | Books (PG-19) |
| `encyclopedia` | Wikipedia |
| `government` | Government reports |
## Labeling Details
- Model: Qwen2.5-Coder-7B-Instruct
- Context length: 16,384 tokens
- Attention layer: 27 (final layer)
- Metric: Sequence-level average median lookback distance
- Block size: 4096 query × 4096 key
- Filler tokens: 128 (added to probe long-range attention)
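A full 16,384 × 16,384 attention map is expensive to materialize, which is presumably why the 4096 × 4096 block size is listed: the map can be walked in causal query/key tiles. The traversal below is an assumption about how such tiling would work, not the documented pipeline:

```python
def causal_tiles(seq_len, block=4096):
    """Yield (query_range, key_range) tiles covering the causal
    (lower-triangular) part of a seq_len x seq_len attention map."""
    for q0 in range(0, seq_len, block):
        q1 = min(q0 + block, seq_len)
        for k0 in range(0, q1, block):      # causal: keys never exceed queries
            yield (q0, q1), (k0, min(k0 + block, seq_len))

tiles = list(causal_tiles(16_384, block=4_096))
print(len(tiles))  # 10 tiles: 1 + 2 + 3 + 4 per query-block row
```

Each tile holds at most 4096² attention weights, so peak memory stays constant regardless of sequence length, while lookback statistics can be accumulated per query position across the key tiles in its row.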
## Usage

```python
from datasets import load_dataset

# Load all splits
ds = load_dataset("KevinDavidHayes/labeled-16k-attention")

# Or load specific files
filtered = load_dataset(
    "KevinDavidHayes/labeled-16k-attention",
    data_files="data/filtered_128.parquet",
)
```
## Citation
If you use this dataset, please cite our work on attention-based data filtering for long-context training.