---
license: other
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- moe
- mixture-of-experts
- baseline
datasets:
- allenai/OLMoE-mix-0924
---
# StdMoE_1b14b_1T
The architecture-matched standard MoE baseline released alongside *EMO: Pretraining Mixture of Experts for Emergent Modularity*, referred to in the paper as Reg. MoE (or "standard MoE") at 1T tokens.

This is a 1B-active / 14B-total parameter Mixture-of-Experts model (128 experts: 127 routed + 1 shared, k=8 active per token), pretrained on 1T tokens of the OLMoE pretraining mix and annealed for an additional 50B tokens with the standard MoE objective (no document-level expert pool constraint). It shares the architecture and training setup of Emo_1b14b_1T, differing only in the training objective.
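To sanity-check the expert counts without downloading the full weights, you can inspect the model configuration. The attribute names below (`num_experts`, `num_experts_per_tok`) are assumptions based on OLMoE-style configs; the custom code shipped with this checkpoint may use different names, so the snippet reads them defensively.

```python
from transformers import AutoConfig

# Minimal sketch: inspect the MoE config. Attribute names are assumed
# (OLMoE-style) and may differ in this repository's custom model code.
config = AutoConfig.from_pretrained("allenai/StdMoE_1b14b_1T", trust_remote_code=True)
print("experts:", getattr(config, "num_experts", None))            # expected to reflect the routed expert count
print("active per token:", getattr(config, "num_experts_per_tok", None))  # expected: 8
```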
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/StdMoE_1b14b_1T"

# Load the model and tokenizer (trust_remote_code is required for the custom MoE code)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Generate a continuation for a short prompt
inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
out = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=1.0, top_p=0.7)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```
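For faster generation you can load the model in bfloat16 on a GPU. This is standard Transformers usage rather than anything specific to this checkpoint; `device_map="auto"` additionally requires the `accelerate` package.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/StdMoE_1b14b_1T"

# Optional GPU setup: bfloat16 weights, automatic device placement.
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=1.0, top_p=0.7)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```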
## Citation
```bibtex
@article{wang2026emo,
  title  = {EMO: Pretraining Mixture of Experts for Emergent Modularity},
  author = {Wang, Ryan and Bhagia, Akshita and Min, Sewon},
  year   = {2026},
  url    = {https://arxiv.org/abs/2605.06663}
}
```