Strip narrative README; keep minimal stub
README.md
CHANGED
@@ -1,143 +1,5 @@

Old README:

---
language: en
license: mit
library_name: pytorch
tags:
- sparse-autoencoder
- temporal-sae
- crosscoder
- interpretability
- temporal-crosscoder
pipeline_tag: feature-extraction
---

Trained-checkpoint artifact for the [temp_xc](https://github.com/chainik1125/temp_xc) research repository. These are **Sparse Autoencoder / Crosscoder** checkpoints trained on `google/gemma-2-2b-it` layer-13 residual-stream activations over 24 000 FineWeb sequences.

See `summary.md` and the autoresearch plan in the code repo for the full story. This HF repo hosts only the binary `.pt` checkpoints; all training, probing, and plotting code lives in the GitHub repo.

## Quick start

Download all checkpoints:

```bash
pip install huggingface_hub
huggingface-cli download han1823123123/txcdr --local-dir ./ckpts
```

Or a single file:

```bash
huggingface-cli download han1823123123/txcdr \
  ckpts/txcdr_contrastive_t5__seed42.pt \
  --local-dir ./ckpts
```
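
The same downloads can be scripted from Python with `huggingface_hub` (a minimal sketch using the library's standard `snapshot_download` / `hf_hub_download` helpers; the repo id and file path mirror the CLI commands above):

```python
from huggingface_hub import hf_hub_download, snapshot_download

# everything in the repo -> ./ckpts
snapshot_download(repo_id="han1823123123/txcdr", local_dir="./ckpts")

# or a single checkpoint file; returns the local path
path = hf_hub_download(
    repo_id="han1823123123/txcdr",
    filename="ckpts/txcdr_contrastive_t5__seed42.pt",
    local_dir="./ckpts",
)
print(path)
```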

## Loading a checkpoint

Each `.pt` is saved as a dict with keys `{"state_dict", "arch", "meta", "state_dict_dtype"}`. The `state_dict` is stored in **fp16** to halve disk cost; the loader in `run_probing.py` casts it back to fp32 when building the model. Example (from `experiments/phase5_downstream_utility/probing/run_probing.py`):

```python
import torch
from src.architectures.crosscoder import TemporalCrosscoder

# checkpoints are plain dicts, so weights_only=False is required
state = torch.load("ckpts/txcdr_t5__seed42.pt",
                   map_location="cuda", weights_only=False)
meta = state["meta"]
T = meta["T"]                                 # e.g. 5
k_eff = meta["k_win"] or (meta["k_pos"] * T)  # window-level k, else per-position k * T
model = TemporalCrosscoder(d_in=2304, d_sae=18432, T=T, k=k_eff).cuda()

# cast the fp16 weights back to fp32 before loading
cast = {k: v.float() if v.dtype == torch.float16 else v
        for k, v in state["state_dict"].items()}
model.load_state_dict(cast)
model.eval()
```

For other architectures, the corresponding class lives under `src/architectures/` in the GitHub repo. The `arch` field in each checkpoint tells you which class to instantiate; the `meta` dict carries the constructor arguments.
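
This suggests a small generic loader. A hedged sketch follows: only `TemporalCrosscoder` and the txcdr `meta` keys are confirmed by the example above; the registry key and the mapping for the other archs are hypothetical and should be filled in from `src/architectures/`.

```python
import torch
from src.architectures.crosscoder import TemporalCrosscoder

# hypothetical registry -- key assumed, check actual state["arch"] values
# and extend with the real classes from src/architectures/
ARCH_CLASSES = {
    "txcdr": TemporalCrosscoder,
}

def build_model(path: str, device: str = "cuda"):
    state = torch.load(path, map_location=device, weights_only=False)
    cls = ARCH_CLASSES[state["arch"]]              # `arch` names the class to use
    meta = state["meta"]
    T = meta["T"]
    k_eff = meta["k_win"] or (meta["k_pos"] * T)   # as in the txcdr example
    # d_in / d_sae are the txcdr values; other archs use different sizes
    model = cls(d_in=2304, d_sae=18432, T=T, k=k_eff).to(device)
    model.load_state_dict({k: v.float() if v.dtype == torch.float16 else v
                           for k, v in state["state_dict"].items()})
    return model.eval()
```
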
## Contents

### Phase 5 canonical benchmark (seed 42, 25 archs)

Trained on the full 6 000-sequence GPU-preloaded subset, with plateau-stopping at <2 % loss drop per 1 000 steps, Adam lr=3e-4, batch=1024, max 25 000 steps. Probed on 36 binary tasks at `last_position` and `mean_pool` aggregations (a sketch of the two aggregations follows the table); see `docs/han/research_logs/phase5_downstream_utility/summary.md` in the GitHub repo for the leaderboard.

| family | arch(s) |
|---|---|
| Token SAE | `topk_sae` |
| Layer crosscoder | `mlc`, `mlc_contrastive` |
| Temporal crosscoder (T-sweep) | `txcdr_t{2,3,5,8,10,15,20}` |
| Stacked per-position | `stacked_t{5,20}` |
| Matryoshka (position-nested) | `matryoshka_t5` |
| Weight-sharing ablation | `txcdr_shared_dec_t5`, `txcdr_shared_enc_t5`, `txcdr_tied_t5`, `txcdr_pos_t5`, `txcdr_causal_t5` |
| Sparse-structure variant | `txcdr_block_sparse_t5` |
| Decoder rank variant | `txcdr_lowrank_dec_t5`, `txcdr_rank_k_dec_t5` |
| Time-contrastive (Ye et al. 2025) | `temporal_contrastive` (single-token) |
| Time × Layer (novel) | `time_layer_crosscoder_t5` (d_sae=8192) |
| TFA | `tfa_small`, `tfa_pos_small` (d_sae=4096, seq_len=32) |
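
As noted above, the two probe aggregations collapse per-token feature activations into a single vector per sequence before the linear probe. A minimal sketch (assuming `acts` is a `[seq_len, d_sae]` activation tensor; the actual probing code is in `run_probing.py`):

```python
import torch

def aggregate(acts: torch.Tensor, mode: str) -> torch.Tensor:
    """Collapse [seq_len, d_sae] feature activations to one [d_sae] probe input."""
    if mode == "last_position":
        return acts[-1]           # features at the final token only
    if mode == "mean_pool":
        return acts.mean(dim=0)   # features averaged over all tokens
    raise ValueError(f"unknown aggregation mode: {mode}")
```
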
### Phase 5.7 autoresearch archs (seed 42)

Novel architectures explored after the 25-arch benchmark. See `docs/han/research_logs/phase5_downstream_utility/2026-04-21-phase5_7-architectures.md` for design details.

**Tier-1 / Tier-2 candidates:**

| arch | role | status |
|---|---|---|
| `txcdr_contrastive_t5` | A2: TXCDR + Matryoshka H/L + InfoNCE on adjacent windows | FINALIST |
| `matryoshka_txcdr_contrastive_t5` | A3: position-nested Matryoshka + InfoNCE on scale-1 prefix | FINALIST |
| `txcdr_rotational_t5` | A1: rank-K Lie-group decoder | DISCARD |
| `txcdr_basis_expansion_t5` | A5: decoder as K=3 basis combination | DISCARD |
| `mlc_temporal_t3` | A4: MLC with shared-across-time encoder | DISCARD |
| `time_layer_contrastive_t5` | A10: time_layer_crosscoder + InfoNCE on (T,L)-mean prefix | AMBIGUOUS |
| `txcdr_dynamics_t5` | A8: recurrent sparse latent with per-feature gate | DISCARD |

**Part B α-sweep variants (seed 42):**

| arch | α | role |
|---|---|---|
| `txcdr_contrastive_t5_alpha003` | 0.03 | A2 under-weight |
| `txcdr_contrastive_t5_alpha100` | 1.00 | A2 paper-default |
| `txcdr_contrastive_t5_k2x` | 0.10 | A2 at k_win=1000 |
| `matryoshka_txcdr_contrastive_t5_alpha003` | 0.03 | A3 under-weight |
| `matryoshka_txcdr_contrastive_t5_alpha100` | 1.00 | A3 paper-default |
| `matryoshka_txcdr_contrastive_t5_k2x` | 0.10 | A3 at k_win=1000 |

**T17 seed-variance checkpoints (in-progress fragment):** a few additional `__seed{1,2,3}` variants exist for some archs, left over from an early seed-variance experiment that was scrapped mid-flight in favour of Phase 5.7 autoresearch. See the autoresearch plan for context.

## Reproduction

For a full from-scratch reproduction (tokenise FineWeb → build the 5-layer activation cache → build the probe cache → train the 25 archs → probe → headline plots → T-sweep plot), see `docs/han/research_logs/phase5_downstream_utility/2026-04-21-reproduction-brief.md` in the GitHub repo. Expect ~120 GB of disk and ~12-15 h of compute on an A40-class GPU.

## Citation

No paper yet; this is work in progress for a NeurIPS submission.

## License

MIT. See LICENSE in the GitHub repo.

New README (minimal stub):

---
license: mit
---

See https://github.com/chainik1125/temp_xc