# convergent-llama-300M-muon-window_4
A 300M-parameter language model trained from scratch on FineWeb-Edu 10BT (~9.4B tokens, 1 epoch) as part of the Convergent Evolution project, which investigates how Fourier features emerge in LLM number embeddings.
## Model details
| | |
|---|---|
| Architecture | LLaMA-style Transformer (12 layers, 1024 hidden, 16 heads, GQA) |
| Parameters | ~300M |
| Optimizer | Muon (for 2D weights) + AdamW (for embeddings/bias/norm) |
| Data perturbation | window-4 context (4-gram-level context only) |
| Training data | FineWeb-Edu sample-10BT (~9.4B tokens) |
| Context length | 1024 tokens |
| Tokenizer | Llama 3 (128K vocab) |
| Batch size | 512 sequences |
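The table translates to roughly the following Hugging Face config and optimizer split. This is a minimal sketch, not the training code: `intermediate_size`, the number of key-value heads, weight tying, the AdamW hyperparameters, and the treatment of `lm_head` are assumptions, and `Muon` is left as a placeholder for whichever implementation was used.

```python
import torch
from transformers import LlamaConfig, LlamaForCausalLM

# Approximate config matching the table above.
# NOTE: intermediate_size, num_key_value_heads, and tie_word_embeddings are
# assumptions, not stated in the card.
config = LlamaConfig(
    vocab_size=128256,             # Llama 3 tokenizer
    hidden_size=1024,
    num_hidden_layers=12,
    num_attention_heads=16,
    num_key_value_heads=4,         # GQA; group size assumed
    intermediate_size=4096,        # assumed MLP width
    max_position_embeddings=1024,  # context length
    tie_word_embeddings=True,      # assumed; gives roughly the stated ~300M total
)
model = LlamaForCausalLM(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")

# Optimizer split described above: Muon for 2D weight matrices,
# AdamW for embeddings, biases, and norms (lm_head grouping is an assumption).
muon_params = [p for n, p in model.named_parameters()
               if p.ndim == 2 and "embed_tokens" not in n and "lm_head" not in n]
adamw_params = [p for n, p in model.named_parameters()
                if p.ndim != 2 or "embed_tokens" in n or "lm_head" in n]
adamw = torch.optim.AdamW(adamw_params, lr=3e-4, weight_decay=0.1)  # hyperparameters assumed
# muon = Muon(muon_params, ...)  # placeholder: substitute your Muon implementation
```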
## Training dynamics

Intermediate checkpoints are saved as branches: `tokens-200M`, `tokens-400M`, ..., `tokens-9.6B`.
```python
from transformers import AutoModelForCausalLM

# Load the final checkpoint
model = AutoModelForCausalLM.from_pretrained("deqing/convergent-llama-300M-muon-window_4")

# Load an intermediate checkpoint (e.g., at 1B tokens)
model = AutoModelForCausalLM.from_pretrained(
    "deqing/convergent-llama-300M-muon-window_4",
    revision="tokens-1B",
)
```
## Citation
Paper forthcoming.