---
library_name: pawn
license: apache-2.0
tags:
  - chess
  - transformer
  - world-model
  - causal-lm
  - next-token-prediction
  - representation-learning
  - pytorch
  - rust
model_name: PAWN-Small
pipeline_tag: other
citation: |
  @software{schweich2026pawn,
    author = {Schweich, Thomas},
    title = {{PAWN}: Playstyle-Agnostic World-model Network for Chess},
    year = {2026},
    url = {https://github.com/thomas-schweich/PAWN},
    license = {Apache-2.0}
  }
model_params: 8936960
d_model: 256
n_layers: 8
n_heads: 4
d_ff: 1024
context_length: 512
vocab_size: 1980
datasets:
  - random-chess-games
language:
  - en
metrics:
  - accuracy
model-index:
  - name: PAWN-Small
    results:
      - task:
          type: next-token-prediction
          name: Chess Move Prediction (Random Games)
        metrics:
          - name: Game Completion Rate
            type: accuracy
            value: 0.523438
          - name: Legal Move Rate
            type: accuracy
            value: 0.997451
          - name: Top-1 Accuracy
            type: accuracy
            value: 0.0854
          - name: Top-5 Accuracy
            type: accuracy
            value: 0.3545
          - name: Val Loss
            type: loss
            value: 2.9026
          - name: Total Training Sequences
            type: other
            value: 51200000
---

# PAWN-Small

PAWN (Playstyle-Agnostic World-model Network for Chess) is a causal transformer trained on random chess games. It learns legal moves, board state representations, and game dynamics purely from uniformly random legal move sequences -- no strategic play, no hand-crafted features, no external game databases.

This is the small variant (~8.9M parameters). PAWN is designed as a frozen backbone for parameter-efficient finetuning into player models with arbitrary playstyles.

[GitHub repository](https://github.com/thomas-schweich/PAWN) -- full source code, training scripts, adapter implementations, and documentation.

## All Variants

| Variant | Parameters | Link |
|---|---|---|
| PAWN-Small | ~9M | `thomas-schweich/pawn-small` |
| PAWN (Base) | ~35M | `thomas-schweich/pawn-base` |
| PAWN-Large | ~67M | `thomas-schweich/pawn-large` |

A previous generation of PAWN backbones (`pawn-{small,base,large}-legacy`) used a 4,278-token coordinate vocabulary, a 256-token context window, and outcome conditioning. They are still available on HuggingFace; see `docs/LEGACY.md` for the full story.

## Headline Metrics

These come from the published model.safetensors (step 195,000 out of 200,000 — the best 5,000-step-cadence checkpoint by val loss), measured on a fresh validation set of random games.

| Metric | Value |
|---|---|
| Game completion rate | 52.34% |
| Per-move legal rate | 99.7451% |
| Late-game legal rate | 99.9223% |
| Top-1 accuracy | 8.54% |
| Top-5 accuracy | 35.45% |
| Val loss | 2.903 |
| Val perplexity | 18.22 |

Game completion rate is the share of validation games in which every prediction along one side's plies was a legal move. The measurement is non-autoregressive: at each ply the model is shown the true ground-truth history and asked for that side's next move, and an illegal prediction at any ply forfeits the game. Errors do not corrupt subsequent positions — each prediction is independent given the true history. Autoregressive game completion has not been measured for these checkpoints and could be higher or lower; see the game completion section of the architecture doc for the full definition. Game completion rate is a much stricter metric than per-move legal rate, and is the main signal that separates capacity between sizes.

| Compound-legality detail | Value |
|---|---|
| Average plies completed per game | 234 |
| Average % of game completed | 70.12% |
| Median forfeit ply (when forfeit) | 111 |
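The compound-legality statistics follow directly from each game's first illegal prediction. As a minimal stand-alone sketch (synthetic legality flags, not PAWN's actual evaluation harness), the metric can be computed from per-ply legality booleans like this:

```python
from statistics import median

def completion_stats(games):
    """games: list of lists of bools -- legality of the model's
    prediction at each of one side's plies, given the true history."""
    completed = 0
    pct_completed = []
    forfeit_plies = []
    for legal in games:
        # the first illegal prediction forfeits the game at that ply
        ply = next((i for i, ok in enumerate(legal) if not ok), None)
        if ply is None:
            completed += 1
            pct_completed.append(1.0)
        else:
            pct_completed.append(ply / len(legal))
            forfeit_plies.append(ply)
    n = len(games)
    return {
        "completion_rate": completed / n,
        "avg_pct_completed": sum(pct_completed) / n,
        "median_forfeit_ply": median(forfeit_plies) if forfeit_plies else None,
    }

# Tiny synthetic example: two clean games, one forfeited at ply 4.
stats = completion_stats([[True] * 10, [True] * 4 + [False] + [True] * 5, [True] * 8])
print(stats)
```

Note how a single illegal prediction mid-game zeroes out an otherwise-clean game for the completion rate, which is why it is so much stricter than the per-move legal rate.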

### Accuracy ceiling

PAWN is trained on uniformly random chess games. At each position with N legal moves, the next move is drawn uniformly, so the Bayes-optimal predictor that does not know the game's outcome can do no better than 1/N at that position. Averaged over the position distribution induced by random games of up to 512 plies, the top-1 ceiling is E[1/N_legal] ≈ 8.43% (95% CI [8.41%, 8.45%], computed over 50,000 fresh random games — see docs/ACCURACY_CEILING.md).

This model's top-1 accuracy of 8.54% is 101% of that ceiling — i.e., essentially at the limit of what any predictor can achieve on this task without outcome information.
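The ceiling is simply E[1/N] when the next move is uniform over N options. A toy Monte Carlo (with made-up legal-move counts, not real chess positions) illustrates why no outcome-blind predictor can beat it:

```python
import random

random.seed(0)

# Stand-in for N_legal at each position: at each "position" the next
# move is drawn uniformly from N options, so the best achievable
# top-1 accuracy at that position is 1/N, and the overall ceiling is
# E[1/N] over the position distribution.
counts = [random.randint(1, 40) for _ in range(100_000)]
ceiling = sum(1.0 / n for n in counts) / len(counts)

# A predictor that always picks option 0 (any fixed choice is equally
# good under a uniform target) empirically matches the ceiling:
hits = sum(random.randrange(n) == 0 for n in counts)
print(ceiling, hits / len(counts))
```

Real chess changes the distribution of N but not the argument: averaging 1/N over positions reached by random play gives the ≈8.43% figure above.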

## Probe Results

Linear probes trained on frozen hidden states measure how well the model's internal representations encode board-level features. The model is never explicitly told about pieces, sides, or rules — these representations emerge purely from next-token prediction on random games.

| Probe | Score | Description |
|---|---|---|
| Piece type | 88.8% | Per-square piece type (13 classes x 64 squares) |
| Side to move | 100.0% | Whose turn it is |
| Is check | 95.2% | Whether the side to move is in check |
| Castling rights | 98.3% | KQkq castling availability |
| En passant square | 99.9% | En passant target square (64 + none) |
| Material count | R² 0.80 (MAE 4.3) | Piece counts per type per color |
| Legal move count | R² 0.60 (MAE 5.9) | Number of legal moves available |
| Halfmove clock | R² 0.48 (MAE 10.2) | Plies since last capture or pawn move |
| Game phase | 94.8% | Opening / middlegame / endgame |

## Diagnostic Results

Edge-case diagnostics measure the model's legal move rate in specific tactical situations.

| Category | Positions | Legal Rate |
|---|---|---|
| In check | 10,000 | 90.9% |
| Double check | 10,000 | 79.3% |
| Pin restricts movement | 10,000 | 89.8% |
| En passant available | 10,000 | 97.6% |
| Castling legal (kingside) | 10,000 | 98.9% |
| Castling legal (queenside) | 10,000 | 98.6% |
| Castling blocked by check | 10,000 | 96.9% |
| Promotion available | 10,000 | 97.6% |
| Checkmate (terminal) | 10,000 | 47.1% |
| Stalemate (terminal) | 10,000 | 57.0% |

## Architecture

| Parameter | Value |
|---|---|
| Architecture | Decoder-only transformer |
| d_model | 256 |
| Layers | 8 |
| Attention heads | 4 |
| Head dimension | 64 |
| d_ff | 1024 |
| Parameters | ~8.9M |
| Vocabulary | 1,980 tokens (1,968 searchless_chess actions + 1 PAD + 11 outcome tokens) |
| Context length | 512 tokens |
| Normalization | Pre-norm RMSNorm |
| FFN | SwiGLU (4x expansion) |
| Positional encoding | Rotary (RoPE, base 10000) |
| Embeddings | Factored (src + dst + promo) |
| Dropout | 0.0 |
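The factored embedding can be illustrated with a short PyTorch sketch. The exact factorization and index conventions in PAWN may differ; this only shows the idea of summing source-square, destination-square, and promotion embeddings rather than keeping one independent row per action:

```python
import torch
import torch.nn as nn

class FactoredMoveEmbedding(nn.Module):
    """Hypothetical sketch: embed a move as the sum of source-square,
    destination-square, and promotion-piece embeddings, so parameters
    are shared across the ~2K actions instead of one row each."""
    def __init__(self, d_model=256, n_squares=64, n_promos=6):
        super().__init__()
        self.src = nn.Embedding(n_squares, d_model)
        self.dst = nn.Embedding(n_squares, d_model)
        self.promo = nn.Embedding(n_promos, d_model)  # index 0 = no promotion

    def forward(self, src_idx, dst_idx, promo_idx):
        return self.src(src_idx) + self.dst(dst_idx) + self.promo(promo_idx)

emb = FactoredMoveEmbedding()
src = torch.tensor([12, 52])    # e2, e7 (0-indexed from a1; illustrative)
dst = torch.tensor([28, 60])    # e4, e8
promo = torch.tensor([0, 5])    # no promotion, queen (hypothetical indexing)
out = emb(src, dst, promo)
print(out.shape)  # torch.Size([2, 256])
```

Sharing the square tables across moves gives the model a built-in notion that moves with the same source or destination are related.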

## Training Details

| Parameter | Value |
|---|---|
| Training data | On-the-fly uniformly random legal games (no external dataset) |
| Objective | Next-token cross-entropy (non-padding positions only) |
| Outcome conditioning | Disabled (`prepend_outcome=False`) -- pure moves, no outcome leakage |
| Total steps | 200,000 |
| Batch size | 256 |
| Total training sequences | 51,200,000 (= total steps × batch size; the published checkpoint is the best 5K-cadence step by val loss, at step 195,000 ≈ 49,920,000 sequences) |
| Max ply per example | 512 |
| Learning rate | 0.0003 (cosine decay with 10,000-step warmup) |
| Optimizer | AdamW (weight decay 0.01) |
| Precision | Mixed (AMP) |
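On-the-fly uniformly random game generation can be sketched with the `python-chess` package. PAWN's actual generator is implemented in Rust on shakmaty (via PyO3); this Python version is only illustrative of the sampling procedure:

```python
import random

import chess  # python-chess; stand-in for PAWN's Rust generator

def random_game(max_ply=512, seed=None):
    """Play one uniformly random legal game: at each ply, sample
    uniformly from all legal moves until the game ends or the
    ply cap is reached."""
    rng = random.Random(seed)
    board = chess.Board()
    moves = []
    while not board.is_game_over() and len(moves) < max_ply:
        move = rng.choice(list(board.legal_moves))
        moves.append(move)
        board.push(move)
    return moves, board.result(claim_draw=True)

moves, result = random_game(seed=0)
print(len(moves), result)
```

Because every training game is freshly sampled, the model never sees the same sequence twice and there is no dataset to memorize, only the rules-induced distribution.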

## Usage

### Loading the model

```python
import torch
from safetensors.torch import load_file
from pawn.config import CLMConfig
from pawn.model import PAWNCLM

cfg = CLMConfig.small()
model = PAWNCLM(cfg).cuda().eval()
weights = load_file("model.safetensors", device="cuda")
model.load_state_dict(weights)
```

Or load directly from HuggingFace:

```python
from pawn.checkpoint import load_backbone_weights
from pawn.config import CLMConfig
from pawn.model import PAWNCLM

weights, config = load_backbone_weights("thomas-schweich/pawn-small")
cfg = CLMConfig.small()
model = PAWNCLM(cfg).eval()
model.load_state_dict(weights)
```

### Finetuning with an adapter

```bash
uv run python scripts/train.py --run-type adapter --strategy bottleneck \
    --checkpoint thomas-schweich/pawn-small \
    --pgn thomas-schweich/pawn-lichess-full \
    --bottleneck-dim 32 --lr 1e-4 --local-checkpoints
```

## Acknowledgments

PAWN builds on ideas and tools from the following projects and publications:

| Component | Reference |
|---|---|
| Transformer | Vaswani et al., "Attention Is All You Need", NeurIPS 2017 |
| RMSNorm | Zhang & Sennrich, "Root Mean Square Layer Normalization", NeurIPS 2019 |
| RoPE | Su et al., "RoFormer: Enhanced Transformer with Rotary Position Embedding", 2021 |
| SwiGLU | Shazeer, "GLU Variants Improve Transformer", 2020 |
| AdamW | Loshchilov & Hutter, "Decoupled Weight Decay Regularization", ICLR 2019 |
| Cosine schedule | Loshchilov & Hutter, "SGDR: Stochastic Gradient Descent with Warm Restarts", ICLR 2017 |
| Mixed precision | Micikevicius et al., "Mixed Precision Training", ICLR 2018 |
| Bottleneck adapters | Houlsby et al., "Parameter-Efficient Transfer Learning for NLP", ICML 2019 |
| LoRA | Hu et al., "LoRA: Low-Rank Adaptation of Large Language Models", ICLR 2022 |
| FiLM | Perez et al., "FiLM: Visual Reasoning with a General Conditioning Layer", AAAI 2018 |
| RoSA | Nikdan et al., "RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation", 2024 |
| Linear probes | Alain & Bengio, "Understanding Intermediate Layers Using Linear Classifier Probes", ICLR Workshop 2017 |
| Searchless Chess (action vocab) | Ruoss et al., "Amortized Planning with Large-Scale Transformers: A Case Study on Chess", 2024 |
| MAIA | McIlroy-Young et al., "Aligning Superhuman AI with Human Behavior: Chess as a Model System", KDD 2020 |
| AlphaZero | Silver et al., "A General Reinforcement Learning Algorithm that Masters Chess, Shogi, and Go through Self-Play", Science 2018 |
| Leela Chess Zero | github.com/LeelaChessZero/lc0 |
| shakmaty | github.com/niklasf/shakmaty |
| PyO3 | github.com/PyO3/pyo3 |
| Lichess | lichess.org / database.lichess.org |

## Citation

```bibtex
@software{schweich2026pawn,
  author = {Schweich, Thomas},
  title = {{PAWN}: Playstyle-Agnostic World-model Network for Chess},
  year = {2026},
  url = {https://github.com/thomas-schweich/PAWN},
  license = {Apache-2.0}
}
```

## License

Apache 2.0. See `LICENSE`.