KALAVAI - Cooperative Decentralized LLM Training
Part of a collection of specialist models from arXiv 2603.22755: cross-lingual, private-domain, and English specialists at 410M, 1B, and Qwen-2.5-1.5B scale.
EleutherAI/pythia-410m fine-tuned on Fiction-domain data as part of the KALAVAI decentralized cooperative training protocol (Phase 1, English domains). MoE fusion of the Phase 1 specialists improves on the best single specialist by +7.72 ±0.02 percentage points (3 seeds). Mean divergence: 15.65%.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The specialist checkpoint; the tokenizer is unchanged from the base model.
model = AutoModelForCausalLM.from_pretrained("mechramc/kalavai-phase1-410m-fiction-specialist-seed42")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
This model is one specialist in a KALAVAI cooperative. To reproduce the MoE fusion results from the paper, load multiple domain specialists and combine them with a trained MoE router (see the paper and GitHub for details).
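The fusion step described above can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the router architecture, dimensions, and the placeholder `nn.Linear` experts are assumptions made for a self-contained example. In practice each expert would be a loaded, frozen domain specialist (e.g. the Fiction checkpoint above), and the router would be trained as described in the paper.

```python
import torch
import torch.nn as nn

class MoEFusion(nn.Module):
    """Minimal sketch: a learned softmax router mixes the outputs of
    several frozen domain specialists (placeholder Linear experts here)."""

    def __init__(self, experts, hidden_dim):
        super().__init__()
        self.experts = nn.ModuleList(experts)
        # The router maps the input representation to one logit per expert.
        self.router = nn.Linear(hidden_dim, len(experts))

    def forward(self, x):
        # (batch, n_experts) mixture weights, summing to 1 per example.
        weights = torch.softmax(self.router(x), dim=-1)
        # Stack expert outputs: (batch, out_dim, n_experts).
        outs = torch.stack([e(x) for e in self.experts], dim=-1)
        # Weighted sum over experts -> (batch, out_dim).
        return (outs * weights.unsqueeze(1)).sum(dim=-1)

# Placeholder experts and made-up dimensions; in practice, substitute the
# loaded domain-specialist models from the collection.
hidden_dim, out_dim = 16, 32
experts = [nn.Linear(hidden_dim, out_dim) for _ in range(3)]
moe = MoEFusion(experts, hidden_dim)
fused = moe(torch.randn(4, hidden_dim))
print(fused.shape)  # torch.Size([4, 32])
```

The router here gates whole expert outputs per example; the paper's actual gating granularity and training procedure should be taken from the paper and GitHub repository.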
```bibtex
@article{kalavai2026,
  title={KALAVAI: Cooperative Decentralized LLM Training via MoE Fusion},
  author={[Authors]},
  journal={arXiv preprint arXiv:2603.22755},
  year={2026}
}
```
Base model
EleutherAI/pythia-410m