---
license: other
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- moe
- mixture-of-experts
- modularity
- index
datasets:
- allenai/OLMoE-mix-0924
---

# EMO: Pretraining Mixture of Experts for Emergent Modularity

This page is an **index** for the model checkpoints released alongside [EMO: Pretraining Mixture of Experts for Emergent Modularity](https://arxiv.org/abs/2605.06663). The repository at `allenai/EMO` does not host any model weights; pick the checkpoint you want from the tables below.

## Released models

### Main release

| Model | Description |
|---|---|
| [`allenai/Emo_1b14b_1T`](https://huggingface.co/allenai/Emo_1b14b_1T) | **EMO**: 1B-active / 14B-total MoE pretrained on 1T tokens + 50B-token midtraining anneal. The main model from the paper. |

### Ablation: EMO at smaller scale

| Model | Description |
|---|---|
| [`allenai/Emo_1b14b_130B`](https://huggingface.co/allenai/Emo_1b14b_130B) | EMO trained on 130B tokens (Table 1 / Figure 11 ablation). Not midtrained. |

### Architecture-matched standard MoE baselines

These share architecture and data with the EMO models above; only the training objective differs (no document-level expert pool constraint).

| Model | Description |
|---|---|
| [`allenai/StdMoE_1b14b_1T`](https://huggingface.co/allenai/StdMoE_1b14b_1T) | Standard MoE (*Reg. MoE* at 1T tokens in the paper). Same setup as `Emo_1b14b_1T`. |
| [`allenai/StdMoE_1b14b_130B`](https://huggingface.co/allenai/StdMoE_1b14b_130B) | Standard MoE (*Reg. MoE* at 130B tokens in the paper). Same setup as `Emo_1b14b_130B`. |

### Memory-matched baselines (Figure 1)

Smaller models trained from scratch at fixed memory budgets, used as comparison points for EMO expert subsets.

| Model | Description |
|---|---|
| [`allenai/Dense_1b_130B`](https://huggingface.co/allenai/Dense_1b_130B) | **Dense @ 8**: 1B dense decoder-only Transformer trained on 130B tokens. Active-parameter-matched with 8-expert subsets of the larger EMO/StdMoE models. |
| [`allenai/StdMoE_1b4b_130B`](https://huggingface.co/allenai/StdMoE_1b4b_130B) | **Reg. MoE @ 32**: 1B-active / 4B-total standard MoE (32 routed experts) trained from scratch on 130B tokens. Memory-matched with 32-expert subsets. |

### EMO-anneal ablation (Appendix B.4)

Tests whether modularity can be induced *after* pretraining by annealing a standard MoE under the EMO objective.

| Model | Description |
|---|---|
| [`allenai/StdMoE_1b14b_1T_Preanneal`](https://huggingface.co/allenai/StdMoE_1b14b_1T_Preanneal) | Standard MoE pretrained on 1T tokens, no annealing. The starting point for the EMO-anneal experiment. |
| [`allenai/StdMoE_1b14b_1T_EmoAnnealed`](https://huggingface.co/allenai/StdMoE_1b14b_1T_EmoAnnealed) | **EMO-anneal**: `StdMoE_1b14b_1T_Preanneal` annealed for 50B tokens under the EMO document-level expert pool objective. |

## Quick start

All checkpoints require `trust_remote_code=True` because they use custom modeling code from the [ryanyxw/transformers](https://github.com/ryanyxw/transformers/tree/flexmoe_v4_57_1) fork. Replace `model_id` with the checkpoint you want from the tables above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/Emo_1b14b_1T"  # main EMO release

# trust_remote_code=True pulls in the custom MoE modeling code shipped with the checkpoint.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Sample a 100-token continuation of a short prompt.
inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
out = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=1.0, top_p=0.7)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```
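
For GPU inference, the same call pattern works with half-precision weights and automatic device placement. The snippet below is a minimal sketch rather than part of the official release: it assumes the checkpoints load cleanly in `bfloat16` and that `accelerate` is installed so `device_map="auto"` can place the weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any checkpoint from the tables above works the same way; the main EMO release is used here.
model_id = "allenai/Emo_1b14b_1T"

# Assumptions: bf16 inference is acceptable for these checkpoints, and `accelerate`
# is installed so device_map="auto" can spread the weights across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Move the tokenized prompt to the model's device, then decode greedily
# for a reproducible continuation.
inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```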

## Citation

```bibtex
@article{wang2026emo,
  title  = {EMO: Pretraining Mixture of Experts for Emergent Modularity},
  author = {Wang, Ryan and Bhagia, Akshita and Min, Sewon},
  year   = {2026},
  url    = {https://arxiv.org/abs/2605.06663}
}
```

## Links

- Paper: https://arxiv.org/abs/2605.06663
- Code: https://github.com/allenai/EMO
- Visualization: https://emovisualization.netlify.app