---
license: other
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- dense
- baseline
- ablation
- memory-matched
datasets:
- allenai/OLMoE-mix-0924
---

# Dense_1b_130B

An active-parameter-matched dense baseline released alongside [EMO: Pretraining Mixture of Experts for Emergent Modularity](https://arxiv.org/abs/2605.06663), referred to as **"Dense @ 8"** in Figure 1 of the paper. This model is not midtrained.

A 1B-parameter dense decoder-only Transformer (no MoE) pretrained from scratch on 130B tokens of the OLMoE pretraining mix. It provides an active-parameter-matched comparison point against 8-expert subsets carved out of the larger 1B/14B EMO models.

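The active-parameter match can be sanity-checked by counting the dense model's parameters directly. The snippet below is a minimal sketch; the exact count depends on the released configuration, but it should land near 1B:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("allenai/Dense_1b_130B", trust_remote_code=True)

# In a dense model every parameter is active on every token, so the total
# count is also the active-parameter count (expected to be roughly 1B).
total = sum(p.numel() for p in model.parameters())
print(f"{total / 1e9:.2f}B parameters")
```
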
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/Dense_1b_130B"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
out = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=1.0, top_p=0.7)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```

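Since the card declares `pipeline_tag: text-generation`, the same checkpoint should also work through the high-level `pipeline` API. This is a sketch for convenience; the sampling arguments are illustrative, not the paper's settings:

```python
from transformers import pipeline

# Text-generation pipeline wrapping the same checkpoint.
generator = pipeline("text-generation", model="allenai/Dense_1b_130B", trust_remote_code=True)
print(generator("Language modeling is ", max_new_tokens=50, do_sample=True, top_p=0.7)[0]["generated_text"])
```
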
## Citation

```bibtex
@article{wang2026emo,
  title  = {EMO: Pretraining Mixture of Experts for Emergent Modularity},
  author = {Wang, Ryan and Bhagia, Akshita and Min, Sewon},
  year   = {2026},
  url    = {https://arxiv.org/abs/2605.06663}
}
```

## Links

- Paper: https://arxiv.org/abs/2605.06663
- Code: https://github.com/allenai/EMO