KALAVAI - Cooperative Decentralized LLM Training
Collection
Specialist models from arXiv 2603.22755. Cross-lingual, private-domain, and English specialists at 410M, 1B, and Qwen-2.5-1.5B scale.
Fine-tuned EleutherAI/pythia-410m on Welsh data as part of the KALAVAI decentralized cooperative training protocol.
Yoruba PPL 41.9→7.7 (5.4×), Welsh 102.7→22.1 (4.6×), Tamil 4.2→3.0. MoE fusion of 4 specialists: +21.76% over the best single specialist (seeds 137+2026).
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the Welsh specialist weights
model = AutoModelForCausalLM.from_pretrained("mechramc/kalavai-cross-lingual-welsh-specialist-seed137")
# The specialist reuses the base Pythia tokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m")
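A quick generation check can confirm the specialist loads correctly; the Welsh prompt and sampling settings below are illustrative, not taken from the paper:

prompt = "Mae Cymru yn"  # hypothetical Welsh prompt ("Wales is ...")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))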
This model is one specialist in a KALAVAI cooperative. To reproduce the MoE fusion results from the paper, load multiple domain specialists and combine them with a trained MoE router (see the paper and GitHub for details).
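A minimal sketch of that fusion step, assuming placeholder repo IDs for the other specialists and a simple logit-mixing gate; the paper's trained router architecture and weights come from its code release, so everything below except the Welsh specialist ID is an assumption:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical list: only the Welsh specialist ID is real; add the other
# domain specialists from the collection here.
specialist_ids = [
    "mechramc/kalavai-cross-lingual-welsh-specialist-seed137",
]

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m")
specialists = [AutoModelForCausalLM.from_pretrained(r).eval() for r in specialist_ids]

# Stand-in router: a single linear gate over mean-pooled hidden states that
# yields per-specialist mixture weights. The paper's trained router may differ.
router = torch.nn.Linear(specialists[0].config.hidden_size, len(specialists))

@torch.no_grad()
def fused_next_token_logits(text):
    inputs = tokenizer(text, return_tensors="pt")
    # Next-token logits from every specialist: (batch, num_experts, vocab)
    expert_logits = torch.stack(
        [m(**inputs).logits[:, -1, :] for m in specialists], dim=1
    )
    # Gate on the first specialist's last hidden layer (an assumption)
    hidden = specialists[0](**inputs, output_hidden_states=True).hidden_states[-1]
    weights = torch.softmax(router(hidden.mean(dim=1)), dim=-1)  # (batch, num_experts)
    return (weights.unsqueeze(-1) * expert_logits).sum(dim=1)   # (batch, vocab)

The gate here is untrained and only shows the data flow: each specialist produces next-token logits, and the router blends them per input.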
@article{kumaresan2026kalavai,
title = {{KALAVAI}: Predicting When Independent Specialist Fusion Works
--- A Quantitative Model for Post-Hoc Cooperative {LLM} Training},
author = {Kumaresan, Ramchand},
journal = {arXiv preprint arXiv:2603.22755},
year = {2026},
url = {https://arxiv.org/abs/2603.22755}
}
Base model
EleutherAI/pythia-410m