KALAVAI — Fiction Specialist (pythia-410m, seed 42)

EleutherAI/pythia-410m fine-tuned on fiction-domain data as one specialist in the KALAVAI decentralized cooperative training protocol.

Paper results

Phase 1, English domains: MoE fusion of the domain specialists improves over the best single specialist by +7.72% ±0.02pp (averaged over 3 seeds), with a mean divergence of 15.65%.

How to use

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("mechramc/kalavai-phase1-410m-fiction-specialist-seed42")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m")

# Generate a short continuation from a prompt
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

This model is one specialist in a KALAVAI cooperative. To reproduce the MoE fusion results from the paper, load multiple domain specialists and combine them with a trained MoE router (see the paper and GitHub for details).
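To illustrate the fusion idea only (this is not the paper's exact router, and the function name and tensor shapes below are assumptions), a trained router can assign softmax weights to the specialists and average their next-token logits:

```python
import torch

def fuse_specialist_logits(specialist_logits, router_scores):
    """Weighted average of per-specialist next-token logits.

    specialist_logits: (num_specialists, vocab_size) tensor,
        one row of logits per domain specialist.
    router_scores: (num_specialists,) unnormalized scores
        produced by a trained router for the current input.
    """
    weights = torch.softmax(router_scores, dim=0)            # mixture weights summing to 1
    return (weights[:, None] * specialist_logits).sum(dim=0) # fused (vocab_size,) logits

# Toy example: 3 specialists over a 5-token vocabulary
logits = torch.randn(3, 5)
scores = torch.tensor([0.2, 1.5, -0.3])
fused = fuse_specialist_logits(logits, scores)
print(fused.shape)
```

In practice the router scores would be computed per input (e.g. from the hidden state of a shared trunk), and the fused logits fed into the usual decoding loop; see the paper and GitHub for the actual fusion procedure.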

Citation

@article{kalavai2026,
  title={KALAVAI: Cooperative Decentralized LLM Training via MoE Fusion},
  author={[Authors]},
  journal={arXiv preprint arXiv:2603.22755},
  year={2026}
}