---
pretty_name: "DyT Composition Study Artifacts"
license: cc-by-4.0
language:
- en
tags:
- machine-learning
- transformers
- layernorm
- dynamic-tanh
- activation-bounding
- reproducibility
size_categories:
- n<1K
configs:
- config_name: artifact_index
  default: true
  data_files:
  - split: train
    path: data/artifact_index.jsonl
---
# DyT Composition Study Artifacts
[![arXiv](https://img.shields.io/badge/arXiv-2604.23434-b31b1b.svg)](https://arxiv.org/abs/2604.23434)
[![GitHub](https://img.shields.io/badge/GitHub-code-black.svg)](https://github.com/lucky-verma/dyt-composition-study)
[![License: CC BY 4.0](https://img.shields.io/badge/License-CC--BY--4.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/)
This dataset contains sanitized result manifests and analysis outputs for
**[When Does Removing LayerNorm Help? Activation Bounding as a Regime-Dependent Implicit Regularizer](https://arxiv.org/abs/2604.23434)**.
DOI: https://doi.org/10.48550/arXiv.2604.23434
## Contents
The artifacts include:
- aggregate training metrics;
- saturation measurements;
- statistical-test summaries;
- predictor-validation outputs;
- table-source manifests;
- selected aggregate analysis files used by the public code repository.
The Dataset Viewer table is an index of the artifact files; the artifacts themselves are stored under `results/`.
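Each line of `data/artifact_index.jsonl` is a JSON record describing one artifact file. A minimal sketch of reading such an index with the Python standard library (the field names `path` and `kind` below are illustrative assumptions, not the actual schema):

```python
import io
import json

# Illustrative sample in the JSON-Lines layout of data/artifact_index.jsonl.
# The "path" and "kind" field names are assumptions for this sketch.
sample = io.StringIO(
    '{"path": "results/metrics.json", "kind": "training-metrics"}\n'
    '{"path": "results/saturation.json", "kind": "saturation"}\n'
)

records = [json.loads(line) for line in sample if line.strip()]

# Group artifact paths by kind for a quick overview of the bundle.
by_kind = {}
for rec in records:
    by_kind.setdefault(rec["kind"], []).append(rec["path"])

print(len(records))     # number of indexed artifacts
print(sorted(by_kind))  # artifact kinds present in the index
```

The same pattern works on the real index once the repository is downloaded; only the field names need adjusting to the actual schema.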
This is not a natural-language training dataset. It does not redistribute WikiText, OpenWebText, LAMBADA, BLIMP, model checkpoints, or raw training logs.
## Intended Use
Use this artifact bundle to:
- inspect the machine-readable results behind the paper;
- reproduce paper tables and consistency checks;
- compare DyT, LayerNorm, RMSNorm, HardTanh, DiffAttn, and related controls at the reported scales;
- audit provenance for reported quantitative claims.
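As one concrete audit step, the index can be checked against a local snapshot of the repository: every file the index claims should actually be present. A hedged sketch, assuming each JSONL record carries a `path` field relative to the repo root (that field name is an assumption, not the documented schema):

```python
import json
import tempfile
from pathlib import Path

def audit_index(snapshot_dir: Path, index_lines):
    """Return paths listed in the index but missing from a local snapshot.

    Assumes each JSONL record has a "path" field relative to the repo root;
    that field name is an assumption for this sketch.
    """
    missing = []
    for line in index_lines:
        rec = json.loads(line)
        if not (snapshot_dir / rec["path"]).is_file():
            missing.append(rec["path"])
    return missing

# Usage with a throwaway snapshot containing one of two indexed files.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "results").mkdir()
    (root / "results" / "metrics.json").write_text("{}")
    lines = [
        '{"path": "results/metrics.json"}',
        '{"path": "results/saturation.json"}',
    ]
    missing = audit_index(root, lines)
    print(missing)  # → ['results/saturation.json']
```

An empty return value means the snapshot is complete with respect to the index; any listed paths point to artifacts that failed to download or were removed.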
## Limitations
- The experiments are compute-limited and trained below the Chinchilla-optimal token budget.
- The included files are result artifacts, not full raw training traces or checkpoints.
- The saturation diagnostic should be treated as a per-deployment calibration cue, not a universal rule.
- Raw public datasets retain their original licenses and are not mirrored here.
## Licensing
The result artifacts in this dataset are released under CC BY 4.0.
The associated GitHub code is released under the MIT License.
## Citation
```bibtex
@misc{verma2026dytcomposition,
  title         = {When Does Removing LayerNorm Help? Activation Bounding as a Regime-Dependent Implicit Regularizer},
  author        = {Verma, Lucky},
  year          = {2026},
  publisher     = {arXiv},
  doi           = {10.48550/arXiv.2604.23434},
  url           = {https://arxiv.org/abs/2604.23434},
  eprint        = {2604.23434},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG}
}
```