PAWN-Large
PAWN (Playstyle-Agnostic World-model Network for Chess) is a causal transformer trained on random chess games. It learns legal moves, board state representations, and game dynamics purely from uniformly random legal move sequences -- no strategic play, no hand-crafted features, no external game databases.
This is the large variant (~66.9M parameters). PAWN is designed as a frozen backbone for parameter-efficient finetuning into player models with arbitrary playstyles.
GitHub Repository -- full source code, training scripts, adapter implementations, and documentation.
All Variants
| Variant | Parameters | Link |
|---|---|---|
| PAWN-Small | ~9M | thomas-schweich/pawn-small |
| PAWN (Base) | ~35M | thomas-schweich/pawn-base |
| PAWN-Large | ~67M | thomas-schweich/pawn-large |
A previous generation of PAWN backbones (pawn-{small,base,large}-legacy) used a 4,278-token coordinate vocabulary, a 256-token context window, and outcome conditioning. They are still available on HuggingFace; see docs/LEGACY.md for the full story.
Headline Metrics
These come from the published model.safetensors (step 195,000 out of 200,000 — the best 5,000-step-cadence checkpoint by val loss), measured on a fresh validation set of random games.
| Metric | Value |
|---|---|
| Game completion rate | 99.76% |
| Per-move legal rate | 99.9990% |
| Late-game legal rate | 99.9996% |
| Top-1 accuracy | 8.63% |
| Top-5 accuracy | 35.56% |
| Val loss | 2.865 |
| Val perplexity | 17.55 |
Game completion rate is the share of validation games in which every prediction along one side's plies was a legal move. The measurement is non-autoregressive: at each ply the model is shown the true ground-truth history and asked for that side's next move, and an illegal prediction at any ply forfeits the game. Errors do not corrupt subsequent positions — each prediction is independent given the true history. Autoregressive game completion has not been measured for these checkpoints and could be higher or lower; see the game completion section of the architecture doc for the full definition. Game completion rate is a much stricter metric than per-move legal rate, and is the main signal that separates capacity between sizes.
| Compound-legality detail | Value |
|---|---|
| Average plies completed per game | 349 |
| Average % of game completed | 99.88% |
| Median forfeit ply (when forfeit) | 153 |
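To make the compound-legality bookkeeping concrete, here is a hypothetical stdlib sketch that computes these three quantities from per-ply legality flags. The function name and input format are illustrative assumptions; the repo's actual evaluation script may differ.

```python
from statistics import median

def completion_stats(games):
    """games: one list of booleans per game, one flag per predicted ply,
    evaluated non-autoregressively (each ply conditioned on the true
    ground-truth history). A game is completed iff every prediction was
    legal; an illegal prediction at any ply forfeits the game there."""
    fractions, forfeit_plies = [], []
    for flags in games:
        if False in flags:
            first_illegal = flags.index(False)
            forfeit_plies.append(first_illegal)
            fractions.append(first_illegal / len(flags))
        else:
            fractions.append(1.0)
    return {
        "completion_rate": sum(f == 1.0 for f in fractions) / len(games),
        "avg_fraction_completed": sum(fractions) / len(games),
        "median_forfeit_ply": median(forfeit_plies) if forfeit_plies else None,
    }
```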
Accuracy ceiling
PAWN is trained on uniformly random chess games. At each position with N legal moves, the next move is drawn uniformly, so the Bayes-optimal predictor that does not know the game's outcome can do no better than 1/N at that position. Averaged over the position distribution induced by random games of up to 512 plies, the top-1 ceiling is E[1/N_legal] ≈ 8.43% (95% CI [8.41%, 8.45%], computed over 50,000 fresh random games — see docs/ACCURACY_CEILING.md).
This model's top-1 accuracy of 8.63% is 102% of that ceiling — i.e., essentially at the limit of what any predictor can achieve on this task without outcome information.
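The ceiling is just E[1/N] over the position distribution. A minimal stdlib sketch, with synthetic legal-move counts standing in for real positions (the real estimate in docs/ACCURACY_CEILING.md uses actual random games):

```python
import random
import statistics

def top1_ceiling(legal_move_counts):
    """Bayes-optimal top-1 accuracy when the next move is uniform over the
    N legal moves: the best guess is right with probability 1/N, so the
    ceiling over a position distribution is E[1/N]."""
    return statistics.mean(1 / n for n in legal_move_counts)

# Note E[1/N] > 1/E[N] when N varies (Jensen's inequality), so positions
# with few legal moves -- checks, endgames -- pull the ceiling up.
random.seed(0)
counts = [random.randint(1, 40) for _ in range(10_000)]
print(top1_ceiling(counts), 1 / statistics.mean(counts))
```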
Probe Results
Linear probes trained on frozen hidden states measure how well the model's internal representations encode board-level features. The model is never explicitly told about pieces, sides, or rules — these representations emerge purely from next-token prediction on random games.
| Probe | Accuracy | Description |
|---|---|---|
| Piece type | 90.3% | Per-square piece type (13 classes x 64 squares) |
| Side to move | 100.0% | Whose turn it is |
| Is check | 93.9% | Whether the side to move is in check |
| Castling rights | 96.8% | KQkq castling availability |
| En passant square | 99.7% | En passant target square (64 + none) |
| Material count | 86.9% (MAE 5.1) | Piece counts per type per color |
| Legal move count | 43.9% (MAE 6.5) | Number of legal moves available |
| Halfmove clock | 11.0% (MAE 4.0) | Plies since last capture or pawn move |
| Game phase | 91.3% | Opening / middlegame / endgame |
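As an illustration of the probing setup (not the repo's actual probe code), here is a minimal closed-form linear probe on synthetic "frozen features": ridge regression onto one-hot targets stands in for the logistic-regression probes typically used, and clustered random data stands in for PAWN's hidden states.

```python
import numpy as np

def train_linear_probe(X, y, n_classes, l2=1e-2):
    """Fit a linear map from frozen hidden states X (n, d) to one-hot
    labels via ridge regression. Returns a (d, n_classes) weight matrix."""
    Y = np.eye(n_classes)[y]
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + l2 * np.eye(d), X.T @ Y)

def probe_accuracy(W, X, y):
    return float(((X @ W).argmax(axis=1) == y).mean())

# Synthetic stand-in for frozen activations: well-separated class clusters,
# so a purely linear readout should recover the labels almost perfectly.
rng = np.random.default_rng(0)
means = 5.0 * rng.normal(size=(4, 16))
y = rng.integers(0, 4, size=2000)
X = means[y] + rng.normal(size=(2000, 16))
W = train_linear_probe(X, y, n_classes=4)
```

The point of keeping the probe linear is that high accuracy then certifies the feature is *linearly* readable from the frozen representation, not merely computable from it.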
Diagnostic Results
Edge-case diagnostics measure the model's legal move rate in specific tactical situations.
| Category | Positions | Legal Rate |
|---|---|---|
| In check | 1000 | 98.4% |
| Double check | 71 | 95.0% |
| Pin restricts movement | 1000 | 97.9% |
| En passant available | 940 | 99.4% |
| Castling legal (kingside) | 1000 | 99.8% |
| Castling legal (queenside) | 1000 | 99.7% |
| Castling blocked by check | 892 | 99.5% |
| Promotion available | 1000 | 99.6% |
| Checkmate (terminal) | 276 | 92.2% |
| Stalemate (terminal) | 41 | 94.9% |
Architecture
| Parameter | Value |
|---|---|
| Architecture | Decoder-only transformer |
| d_model | 640 |
| Layers | 10 |
| Attention heads | 8 |
| Head dimension | 80 |
| d_ff | 2560 |
| Parameters | ~66.9M |
| Vocabulary | 1,980 tokens (1,968 searchless_chess actions + 1 PAD + 11 outcome tokens) |
| Context length | 512 tokens |
| Normalization | Pre-norm RMSNorm |
| FFN | SwiGLU (4x expansion) |
| Positional encoding | Rotary (RoPE, base 10000) |
| Embeddings | Factored (src + dst + promo) |
| Dropout | 0.0 |
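The reported ~66.9M can be sanity-checked from the table. A back-of-envelope sketch, assuming bias-free projections and approximating the factored src+dst+promo embedding as a single table (both assumptions; this only checks the order of magnitude):

```python
d_model, n_layers, d_ff, vocab = 640, 10, 2560, 1980

attn = 4 * d_model * d_model      # Q, K, V, O projections, no biases
ffn = 3 * d_model * d_ff          # SwiGLU: gate, up, and down projections
blocks = n_layers * (attn + ffn)  # RMSNorm gains omitted (negligible)

# Factored embedding approximated as one vocab x d_model table.
embed = vocab * d_model

total = blocks + embed
print(f"~{total / 1e6:.1f}M")  # ~66.8M, close to the reported ~66.9M
```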
Training Details
| Parameter | Value |
|---|---|
| Training data | On-the-fly uniformly random legal games (no external dataset) |
| Objective | Next-token cross-entropy (non-padding positions only) |
| Outcome conditioning | Disabled (prepend_outcome=False) — pure moves, no outcome leakage |
| Total steps | 200,000 |
| Batch size | 256 |
| Total training sequences | 51,200,000 (= total steps × batch size; the published checkpoint is the best 5K-cadence step by val loss, at step 195,000 ≈ 49,920,000 sequences) |
| Max ply per example | 512 |
| Learning rate | 0.0003 (cosine decay with 10,000-step warmup) |
| Optimizer | AdamW (weight decay 0.01) |
| Precision | Mixed (AMP) |
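The learning-rate row can be written out explicitly. This is one common reading of "cosine decay with 10,000-step warmup"; the linear warmup shape and the zero final LR are assumptions, not confirmed details of the training script:

```python
import math

def lr_at(step, max_lr=3e-4, warmup=10_000, total=200_000):
    """Linear warmup from 0 to max_lr over `warmup` steps, then cosine
    decay to 0 over the remaining steps."""
    if step < warmup:
        return max_lr * step / warmup
    progress = (step - warmup) / (total - warmup)
    return max_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```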
Usage
Loading the model
```python
import torch
from safetensors.torch import load_file

from pawn.config import CLMConfig
from pawn.model import PAWNCLM

cfg = CLMConfig.large()
model = PAWNCLM(cfg).cuda().eval()
weights = load_file("model.safetensors", device="cuda")
model.load_state_dict(weights)
```
Or load directly from HuggingFace:
```python
from pawn.checkpoint import load_backbone_weights
from pawn.config import CLMConfig
from pawn.model import PAWNCLM

weights, config = load_backbone_weights("thomas-schweich/pawn-large")
cfg = CLMConfig.large()
model = PAWNCLM(cfg).eval()
model.load_state_dict(weights)
```
Finetuning with an adapter
```bash
uv run python scripts/train.py --run-type adapter --strategy bottleneck \
    --checkpoint thomas-schweich/pawn-large \
    --pgn thomas-schweich/pawn-lichess-full \
    --bottleneck-dim 32 --lr 1e-4 --local-checkpoints
```
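For intuition about what `--strategy bottleneck` trains, here is a conceptual numpy sketch of a residual bottleneck adapter (Houlsby-style down/up projection). The class name and initialization scheme are illustrative assumptions; the real implementation lives in the repo and wraps the frozen torch backbone.

```python
import numpy as np

class BottleneckAdapter:
    """Down-project the hidden state to a small dimension, apply a
    nonlinearity, up-project, and add the result back residually.
    Zero-initializing the up-projection makes the adapter an exact
    identity at the start of finetuning, so training begins from the
    frozen backbone's behavior."""

    def __init__(self, d_model=640, bottleneck_dim=32, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.down = 0.02 * rng.normal(size=(d_model, bottleneck_dim))
        self.up = np.zeros((bottleneck_dim, d_model))

    def __call__(self, h):
        # h: (batch, d_model) hidden states from a frozen layer
        return h + np.maximum(h @ self.down, 0.0) @ self.up
```

Only the adapter's two small matrices are trained (2 × 640 × 32 ≈ 41K parameters per layer here), which is what makes the frozen-backbone finetuning parameter-efficient.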
Acknowledgments
PAWN builds on ideas and tools from the following projects and publications:
- Grandmaster-Level Chess Without Search
- RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation
- LoRA: Low-Rank Adaptation of Large Language Models
- RoFormer: Enhanced Transformer with Rotary Position Embedding
- Aligning Superhuman AI with Human Behavior: Chess as a Model System
Citation
```bibtex
@software{schweich2026pawn,
  author  = {Schweich, Thomas},
  title   = {{PAWN}: Playstyle-Agnostic World-model Network for Chess},
  year    = {2026},
  url     = {https://github.com/thomas-schweich/PAWN},
  license = {Apache-2.0}
}
```
License
Apache 2.0. See LICENSE.