Chess model submitted to the LLM Course Chess Challenge.
## Submission Info
- Submitted by: janisaiad
- Parameters: 43,104
- Organization: LLM-course
## Model Details
- Architecture: Tiny Recursive Model (TRM) - looping recurrent transformer (cycle-shared weights)
- Vocab size: 148
- Embedding dim: 48
- Layers: 1
- Heads: 2
- Cycles: 4
TRM note: this is a looping TRM. At both training and inference time, the same single-layer transformer stack is run for 4 recurrent refinement cycles, with weights shared across cycles. This increases effective compute and reasoning depth without increasing the parameter count.
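The cycle-shared loop described above can be sketched as follows. This is an illustrative toy, not the actual model code: `shared_block` stands in for the real 1-layer, 2-head transformer block, and all names and shapes other than the embedding dim and cycle count are assumptions.

```python
# Illustrative sketch of TRM cycle-shared refinement (hypothetical code,
# not the submitted model's implementation).
import numpy as np

EMBED_DIM = 48  # from the model card
CYCLES = 4      # from the model card

rng = np.random.default_rng(0)
# One shared block's weights: reused on every cycle, so the parameter
# count stays that of a single layer.
W = rng.standard_normal((EMBED_DIM, EMBED_DIM)) * 0.05

def shared_block(h, W):
    # Stand-in for the transformer block: a residual update that
    # applies the *same* weights on every call.
    return h + np.tanh(h @ W)

def trm_forward(h):
    for _ in range(CYCLES):      # 4 recurrent refinement cycles
        h = shared_block(h, W)   # identical weights each cycle
    return h

h0 = rng.standard_normal((10, EMBED_DIM))  # e.g. 10 token embeddings
out = trm_forward(h0)
print(out.shape)  # (10, 48): depth grows with cycles, parameters do not
```

The key design point is that `W` is created once and reused in every iteration, so compute scales with `CYCLES` while the parameter count does not.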
## Elo Fine-Tuning
This model was fine-tuned on Lichess games filtered to the 1200-1400 Elo range to improve playing strength and win rate. Fine-tuning focuses on learning from games played at an intermediate skill level, optimizing the model to choose stronger moves.
Fine-tuning details:
- Dataset: Lichess games filtered by Elo (1200-1400 range)
- Objective: Maximize win rate and chess playing strength
- Training approach: Supervised learning on high-quality game sequences
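The Elo filtering step above can be sketched with stdlib-only code. This is a hypothetical helper (the actual data pipeline is not published here); it assumes standard PGN `WhiteElo`/`BlackElo` header tags as exported by Lichess.

```python
# Hypothetical sketch: keep only games where both players fall in an
# Elo band, reading standard PGN header tags. Not the actual pipeline.
import re

def in_elo_band(pgn_headers: str, lo: int = 1200, hi: int = 1400) -> bool:
    """Return True iff both WhiteElo and BlackElo lie in [lo, hi]."""
    elos = re.findall(r'\[(?:White|Black)Elo "(\d+)"\]', pgn_headers)
    if len(elos) != 2:
        return False  # missing or unrated game -> drop it
    return all(lo <= int(e) <= hi for e in elos)

sample = '[WhiteElo "1315"]\n[BlackElo "1250"]\n'
print(in_elo_band(sample))  # True: both ratings are inside 1200-1400
```

Dropping games with missing ratings is a conservative choice; a real pipeline might instead fall back to a rating estimate.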