---
pretty_name: "DyT Composition Study Artifacts"
license: cc-by-4.0
language:
- en
tags:
- machine-learning
- transformers
- layernorm
- dynamic-tanh
- activation-bounding
- reproducibility
size_categories:
- n<1K
configs:
- config_name: artifact_index
default: true
data_files:
- split: train
path: data/artifact_index.jsonl
---
# DyT Composition Study Artifacts
[Paper](https://arxiv.org/abs/2604.23434) · [Code](https://github.com/lucky-verma/dyt-composition-study) · [License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
This dataset contains sanitized result manifests and analysis outputs for
**[When Does Removing LayerNorm Help? Activation Bounding as a Regime-Dependent Implicit Regularizer](https://arxiv.org/abs/2604.23434)**.
DOI: https://doi.org/10.48550/arXiv.2604.23434
## Contents
The artifacts include:
- aggregate training metrics;
- saturation measurements;
- statistical-test summaries;
- predictor-validation outputs;
- table-source manifests;
- selected aggregate analysis files used by the public code repository.
The Dataset Viewer table is an index of the artifact files. The machine-readable result artifacts are stored under `results/`.
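Since the index is a JSONL file (one JSON object per line), it can be read with the standard library alone. A minimal sketch, assuming illustrative field names (`path` and `kind` are placeholders here, not the actual schema of `data/artifact_index.jsonl`):

```python
import json

# Illustrative records only -- the real data/artifact_index.jsonl may use
# different field names; "path" and "kind" are assumptions for this sketch.
sample_jsonl = """\
{"path": "results/metrics/train_summary.json", "kind": "training-metrics"}
{"path": "results/tables/table1_manifest.json", "kind": "table-source"}
"""

def load_artifact_index(text):
    """Parse a JSONL artifact index: one JSON object per non-empty line."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

index = load_artifact_index(sample_jsonl)
for record in index:
    print(record["kind"], "->", record["path"])
```

In practice you would pass the contents of `data/artifact_index.jsonl` to `load_artifact_index` and use the resulting records to locate files under `results/`.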
This is not a natural-language training dataset. It does not redistribute WikiText, OpenWebText, LAMBADA, BLIMP, model checkpoints, or raw training logs.
## Intended Use
Use this artifact bundle to:
- inspect the machine-readable results behind the paper;
- reproduce paper tables and consistency checks;
- compare DyT, LayerNorm, RMSNorm, HardTanh, DiffAttn, and related controls at the reported scales;
- audit provenance for reported quantitative claims.
## Limitations
- The experiments are compute-limited and below Chinchilla-optimal training.
- The included files are result artifacts, not full raw training traces or checkpoints.
- The saturation diagnostic should be treated as a per-deployment calibration cue, not a universal rule.
- Raw public datasets retain their original licenses and are not mirrored here.
## Licensing
The result artifacts in this dataset are released under CC BY 4.0.
The associated GitHub code is released under the MIT License.
## Citation
```bibtex
@misc{verma2026dytcomposition,
  title         = {When Does Removing LayerNorm Help? Activation Bounding as a Regime-Dependent Implicit Regularizer},
  author        = {Verma, Lucky},
  year          = {2026},
  publisher     = {arXiv},
  doi           = {10.48550/arXiv.2604.23434},
  url           = {https://arxiv.org/abs/2604.23434},
  eprint        = {2604.23434},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG}
}
```