KALAVAI β€” Code Specialist (pythia-410m, seed 2026)

A code-domain specialist fine-tuned from EleutherAI/pythia-410m as part of the KALAVAI decentralized cooperative training protocol.

Paper results

Perplexity improvements reported in the paper: Yoruba 41.9 → 7.7 (5.4×), Welsh 102.7 → 22.1 (4.6×), Tamil 4.2 → 3.0. MoE fusion of 4 specialists improves +21.76% over the best single specialist (seeds 137 + 2026).

How to use

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned specialist weights.
model = AutoModelForCausalLM.from_pretrained("mechramc/kalavai-cross-lingual-code-specialist-seed2026")
# The specialist reuses the base model's tokenizer unchanged.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m")

This model is one specialist in a KALAVAI cooperative. To reproduce the MoE fusion results from the paper, load multiple domain specialists and combine them with a trained MoE router (see the paper and GitHub for details).
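The paper and repository define the actual router architecture and training; purely as an illustration of the gating idea, a minimal sketch with toy NumPy arrays (all shapes and names here are hypothetical, not the paper's implementation) might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Toy stand-ins for 4 specialists' next-token logits (vocab size 8).
specialist_logits = rng.normal(size=(4, 8))

# Hypothetical trained router: maps a hidden state to one weight per specialist.
hidden = rng.normal(size=(16,))
router_weights = rng.normal(size=(16, 4))
gate = softmax(hidden @ router_weights)  # shape (4,), weights sum to 1

# Fused prediction: gate-weighted combination of the specialists' logits.
fused = gate @ specialist_logits  # shape (8,)
```

In practice the router is trained jointly over the frozen specialists; this sketch only shows how a softmax gate combines their outputs at inference time.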

Citation

@article{kumaresan2026kalavai,
  title     = {{KALAVAI}: Predicting When Independent Specialist Fusion Works
               --- A Quantitative Model for Post-Hoc Cooperative {LLM} Training},
  author    = {Kumaresan, Ramchand},
  journal   = {arXiv preprint arXiv:2603.22755},
  year      = {2026},
  url       = {https://arxiv.org/abs/2603.22755}
}