---
license: other
language:
  - en
library_name: transformers
pipeline_tag: text-generation
tags:
  - moe
  - mixture-of-experts
  - baseline
  - ablation
  - memory-matched
datasets:
  - allenai/OLMoE-mix-0924
---

# StdMoE_1b4b_130B

A memory-matched standard MoE baseline released alongside *EMO: Pretraining Mixture of Experts for Emergent Modularity* (labeled "Reg. MoE @ 32" in Figure 1 of the paper). This checkpoint is pretrained only; it has not been midtrained.

It is a 1B-active / 4B-total-parameter Mixture-of-Experts model (32 routed experts plus 1 shared expert, with k=8 experts active per token), pretrained from scratch on 130B tokens of the OLMoE pretraining mix using the standard MoE objective. It provides a memory-matched comparison point against 32-expert subsets carved out of the larger 1B/14B EMO models.
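To confirm the routing setup, the expert counts can be read from the model config. This is a minimal sketch; the exact config field names (`num_experts`, `num_experts_per_tok`) are assumptions and may differ in this model's custom modeling code:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("allenai/StdMoE_1b4b_130B", trust_remote_code=True)

# The field names below are assumptions; print(config) to find the actual keys.
print(getattr(config, "num_experts", "n/a"))          # expected: 32 routed experts
print(getattr(config, "num_experts_per_tok", "n/a"))  # expected: k=8 active per token
```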

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/StdMoE_1b4b_130B"

# trust_remote_code is required because this model ships custom architecture code.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Sample a 100-token continuation with nucleus sampling.
inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
out = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=1.0, top_p=0.7)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```
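For faster inference on a GPU, the model can also be loaded in half precision. This is a minimal sketch, assuming a CUDA device is available and that the checkpoint tolerates bfloat16; neither is confirmed for this model:

```python
import torch
from transformers import AutoModelForCausalLM

# Assumption: bfloat16 inference works for this checkpoint; fall back to float32 if not.
model = AutoModelForCausalLM.from_pretrained(
    "allenai/StdMoE_1b4b_130B",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).to("cuda")
```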

## Citation

```bibtex
@article{wang2026emo,
  title  = {EMO: Pretraining Mixture of Experts for Emergent Modularity},
  author = {Wang, Ryan and Bhagia, Akshita and Min, Sewon},
  year   = {2026},
  url    = {https://arxiv.org/abs/2605.06663}
}
```

## Links