KALAVAI - Cooperative Decentralized LLM Training
Collection
Part of a 20-item collection of specialist models from arXiv:2603.22755: cross-lingual, private-domain, and English specialists at 410M, 1B, and Qwen-2.5-1.5B scale.
Fine-tuned Qwen/Qwen2.5-1.5B on Fiction data as part of the KALAVAI decentralized cooperative training protocol.
Code: https://github.com/mechramc/Kalavai
Part of the Qwen-2.5-1.5B specialist set. MoE fusion of the specialists gains +1.06 ±0.01 percentage points over the best single specialist (mean over 3 seeds). Mean inter-specialist divergence is 3.16%, near the floor of the gain-divergence relationship characterized in the paper.
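For clarity on how the headline number is read, assuming the convention that "gain" is the fusion score minus the best single specialist's score, averaged over seeds (the per-seed values below are made up for illustration; the real scores are in the paper):

```python
# Hypothetical per-seed scores (NOT the paper's raw numbers), used only to
# illustrate the assumed "gain over best specialist" convention.
fusion_acc = [0.742, 0.741, 0.743]           # MoE fusion, one entry per seed
best_specialist_acc = [0.731, 0.731, 0.732]  # best single specialist per seed
gains = [f - b for f, b in zip(fusion_acc, best_specialist_acc)]
mean_gain_pp = 100 * sum(gains) / len(gains)
print(f"mean gain: {mean_gain_pp:+.2f} pp")
```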
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the specialist checkpoint; the tokenizer is shared with the base model.
model = AutoModelForCausalLM.from_pretrained("mechramc/kalavai-qwen-fiction-specialist-seed42")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")
```
This model is one specialist in a KALAVAI cooperative. To reproduce the MoE fusion results from the paper, load multiple domain specialists and combine them with a trained MoE router (see the paper and GitHub for details).
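The fusion step can be sketched as follows. This is a minimal illustration of the post-hoc MoE idea only, assuming a gate that softmax-weights each specialist's next-token distribution; the actual trained router and specialist checkpoints come from the paper and the GitHub repo, and all names here (`moe_fuse`, the toy logits) are hypothetical stand-ins.

```python
# Illustrative sketch only: real KALAVAI fusion uses trained specialist
# checkpoints and a learned router. Here the "specialists" are random logit
# arrays and the router scores are toy values, just to show the combination.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def moe_fuse(specialist_logits, router_logits):
    """Combine per-specialist next-token logits with router weights.

    specialist_logits: (n_specialists, vocab) array of logits.
    router_logits: (n_specialists,) raw router scores for this input.
    Returns fused next-token probabilities of shape (vocab,).
    """
    weights = softmax(router_logits)             # gate over specialists
    probs = softmax(specialist_logits, axis=-1)  # per-specialist distributions
    return weights @ probs                       # weighted mixture

rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 8))  # 3 specialists, toy vocab of 8
fused = moe_fuse(logits, np.array([2.0, 0.5, -1.0]))
print(fused.shape)  # (8,); entries sum to 1
```

Because the router weights and each specialist's distribution both sum to one, the fused output is itself a valid probability distribution.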
Citation
```bibtex
@article{kumaresan2026kalavai,
  title   = {{KALAVAI}: Predicting When Independent Specialist Fusion Works
             --- A Quantitative Model for Post-Hoc Cooperative {LLM} Training},
  author  = {Kumaresan, Ramchand},
  journal = {arXiv preprint arXiv:2603.22755},
  year    = {2026},
  url     = {https://arxiv.org/abs/2603.22755}
}
```
Base model
Qwen/Qwen2.5-1.5B