ryanyxw committed · verified
Commit c3b7fae · Parent(s): 297d2ba

Create README.md

Files changed (1): README.md (+51, -0)
---
license: other
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- moe
- mixture-of-experts
- baseline
- ablation
- memory-matched
datasets:
- allenai/OLMoE-mix-0924
---

# StdMoE_1b4b_130B

A memory-matched standard MoE baseline released alongside [EMO: Pretraining Mixture of Experts for Emergent Modularity](https://arxiv.org/abs/2605.06663), referred to as **"Reg. MoE @ 32"** in Figure 1 of the paper. This checkpoint is not midtrained.

StdMoE_1b4b_130B is a 1B-active / 4B-total parameter Mixture-of-Experts model (32 routed experts plus 1 shared expert, k=8 active per token) pretrained from scratch on 130B tokens of the OLMoE pretraining mix with the standard MoE objective. It provides a memory-matched comparison point against 32-expert subsets carved out of the larger 1B/14B EMO models.
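
To make the layer shape concrete, the sketch below implements the routing pattern described above: a router picks the top-8 of 32 routed experts per token, their outputs are combined with softmax weights, and a shared expert is always applied. The dimensions, activation, and dense dispatch loop are illustrative assumptions, not the released model's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Illustrative MoE feed-forward layer: n_experts routed experts,
    one always-on shared expert, top-k routing per token."""

    def __init__(self, d_model=2048, d_ff=1024, n_experts=32, k=8):  # hypothetical sizes
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        # Shared expert: applied to every token regardless of routing.
        self.shared = nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))

    def forward(self, x):  # x: (num_tokens, d_model)
        weights, idx = torch.topk(self.router(x), self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # normalize over the k selected experts
        out = self.shared(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e  # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] = out[mask] + weights[mask, slot, None] * expert(x[mask])
        return out
```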

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/StdMoE_1b4b_130B"
# trust_remote_code is needed because the repository ships its own modeling code.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
# Nucleus sampling; set do_sample=False for greedy decoding.
out = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=1.0, top_p=0.7)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```
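
Since the headline figures are 1B active / 4B total parameters, a quick sanity check on the total count (continuing from the snippet above) is:

```python
# Should come out to roughly 4B parameters in total.
total = sum(p.numel() for p in model.parameters())
print(f"total parameters: {total / 1e9:.2f}B")
```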

## Citation

```bibtex
@article{wang2026emo,
  title  = {EMO: Pretraining Mixture of Experts for Emergent Modularity},
  author = {Wang, Ryan and Bhagia, Akshita and Min, Sewon},
  year   = {2026},
  url    = {https://arxiv.org/abs/2605.06663}
}
```

## Links

- Paper: https://arxiv.org/abs/2605.06663
- Code: https://github.com/allenai/EMO