# Loom: Peer-Contributed Compute for Distributed Training of Code Generation Models
Author: Kanishka Gunawardana
## What is Loom?
Loom is a peer-to-peer distributed training system in which users contribute idle GPU compute to collectively train a code generation model (Pulse) at near-zero infrastructure cost. Contributors receive up to a 90% discount on model access.
## Key Idea
Traditional AI companies spend over $100 million renting GPUs to train frontier models. Loom replaces rented data-center GPUs with community-donated GPU cycles, making training essentially free.
## How It Works
- Users install the CLI tool: `pip install loom`
- The tool donates idle GPU cycles in the background
- Gradients are sent to a central coordinator
- The coordinator aggregates gradients and updates the model
- Contributors get discounted access to the improved model
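The round-trip above can be sketched with a toy linear model. Everything in this sketch is illustrative: the real system trains a transformer and must handle serialization, scheduling, and fault tolerance, none of which appear here.

```python
import numpy as np

def peer_local_gradient(w, X, y):
    """One peer's contribution: the MSE gradient computed on its local data shard."""
    residual = X @ w - y
    return X.T @ residual / len(X)

def coordinator_update(w, peer_grads, lr=0.1):
    """Coordinator step: average the peers' gradients and apply one SGD update."""
    return w - lr * np.mean(peer_grads, axis=0)

# Simulated rounds: three peers, each holding a private data shard.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
shards = [(X := rng.normal(size=(30, 2)), X @ true_w) for _ in range(3)]

w = np.zeros(2)
for _ in range(200):  # each round = local gradients + one coordinator update
    grads = [peer_local_gradient(w, X, y) for X, y in shards]
    w = coordinator_update(w, grads)
```

Note that only gradients cross the network; raw training data never leaves a peer's machine.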
## Pricing
Everyone pays. Peers get a discount based on their GPU contribution:
| User Type | Price per 1M Tokens |
|---|---|
| Regular user | $5.00 |
| Active peer (RTX 3060, 4hrs/day) | $0.50 (90% off) |
| Light peer (GTX 1060, 1hr/day) | $4.33 |
| Enterprise | $8-10 |
For comparison: Claude Opus and GPT-4 are each priced around $15 per 1M tokens.
## Model Evolution: Pulse
| Phase | Model | Peers | Target |
|---|---|---|---|
| 1 | Pulse 7B (StarCoder2-7B) | 10-50 | 40% HumanEval |
| 2 | Pulse 16B (DeepSeek-Coder-V2) | 500-5K | 60% HumanEval |
| 3 | Pulse 33B (custom) | 5K-50K | 75% HumanEval |
| 4 | Pulse 200B MoE (frontier) | 100K-1M | 90% HumanEval |
## Security
- Multi-Krum Byzantine gradient filtering (tolerates up to 30% malicious peers)
- Median-based gradient magnitude validation
- Probabilistic re-verification (5% random re-compute)
- Rolling reputation system
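A minimal NumPy sketch of the Multi-Krum selection rule listed above: each peer's gradient is scored by its summed squared distance to its n − f − 2 nearest neighbors, and the m lowest-scoring gradients are averaged. Parameter names are illustrative, and the reputation and re-verification layers are omitted.

```python
import numpy as np

def multi_krum(grads, f, m):
    """Multi-Krum aggregation.

    grads: (n, d) array of flattened peer gradients
    f:     assumed maximum number of Byzantine peers
    m:     number of gradients to keep and average
    Returns the aggregated gradient and the indices of the selected peers.
    """
    n = len(grads)
    # Pairwise squared Euclidean distances between all peer gradients.
    diffs = grads[:, None, :] - grads[None, :, :]
    dists = np.sum(diffs ** 2, axis=-1)
    scores = np.empty(n)
    for i in range(n):
        # Distances from peer i to every other peer, sorted ascending.
        d = np.sort(np.delete(dists[i], i))
        # Krum score: sum of distances to the n - f - 2 closest neighbors.
        scores[i] = d[: n - f - 2].sum()
    selected = np.argsort(scores)[:m]
    return grads[selected].mean(axis=0), selected
```

An outlier gradient sits far from the honest cluster, so its score is large and it is excluded before averaging, which is what lets the coordinator tolerate a bounded fraction of malicious peers.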
## Paper
The full 41-page research paper is available in this repository (loom_paper.pdf).
## GitHub
https://github.com/kanishka089/loom
## Contact
Kanishka Gunawardana - kanishka.gunawardana@gmail.com