convergent-llama-300M-muon-isolate

A 300M-parameter language model trained from scratch on FineWeb-Edu 10BT (~9.4B tokens, 1 epoch) as part of the Convergent Evolution project, which investigates how Fourier features emerge in LLM number embeddings.
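
For reference, a minimal sketch of the kind of probe the project describes: take the input-embedding rows for small integers and look for periodic structure along the number axis with an FFT. The 0-999 range, the single-token filter, and the top-k readout are illustrative choices, not the project's published analysis code.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deqing/convergent-llama-300M-muon-isolate"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Gather input-embedding rows for integers that map to a single token.
emb = model.get_input_embeddings().weight.detach()
nums, rows = [], []
for n in range(1000):  # illustrative range; most 1-3 digit numbers are single tokens
    ids = tok.encode(str(n), add_special_tokens=False)
    if len(ids) == 1:
        nums.append(n)
        rows.append(emb[ids[0]])
E = torch.stack(rows)  # [num_integers, hidden]

# FFT along the number axis: periodic structure in n -> embedding(n)
# shows up as peaks in the magnitude spectrum (assumes the kept integers
# form a contiguous, evenly spaced range).
spectrum = torch.fft.rfft(E - E.mean(dim=0), dim=0).abs().mean(dim=1)
top = torch.topk(spectrum[1:], k=5).indices + 1  # skip the DC bin
print("dominant frequency bins:", top.tolist())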

Model details

Architecture: LLaMA-style Transformer (12 layers, 1024 hidden size, 16 heads, GQA)
Parameters: ~300M
Optimizer: Muon (2D weight matrices) + AdamW (embeddings, biases, norms)
Data perturbation: block-diagonal attention mask so that number tokens cannot attend to surrounding context (see the sketch after this list)
Training data: FineWeb-Edu sample-10BT (~9.4B tokens)
Context length: 1024 tokens
Tokenizer: Llama 3 (128K vocab)
Batch size: 512 sequences
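
The block-diagonal mask is the one non-standard ingredient above. Below is a minimal sketch of one way to build such a mask, assuming causal attention restricted to positions that share a block label (number spans vs. surrounding context); the block_diagonal_mask helper and the block_ids layout are hypothetical, not the training code used here.

import torch

def block_diagonal_mask(block_ids: torch.Tensor) -> torch.Tensor:
    # True where position i may attend to position j: j <= i (causal)
    # and both positions carry the same block label. Note this also
    # stops context tokens from attending into number spans.
    seq = block_ids.shape[0]
    causal = torch.tril(torch.ones(seq, seq, dtype=torch.bool))
    same_block = block_ids.unsqueeze(0) == block_ids.unsqueeze(1)
    return causal & same_block

# Hypothetical layout: context tokens labeled 0, a number span labeled 1.
block_ids = torch.tensor([0, 0, 0, 1, 1, 0, 0])
print(block_diagonal_mask(block_ids).int())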

Training dynamics

Intermediate checkpoints are saved as repository branches (tokens-200M, tokens-400M, ..., tokens-9.6B) and can be loaded via the revision argument:

from transformers import AutoModelForCausalLM

# Load final checkpoint
model = AutoModelForCausalLM.from_pretrained("deqing/convergent-llama-300M-muon-isolate")

# Load intermediate checkpoint (e.g., at 1B tokens)
model = AutoModelForCausalLM.from_pretrained("deqing/convergent-llama-300M-muon-isolate", revision="tokens-1B")
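
To trace training dynamics across every checkpoint, it can be safer to enumerate the branches from the Hub than to hard-code names (the suffix switches from M to B partway through the run). A sketch using huggingface_hub:

from huggingface_hub import list_repo_refs
from transformers import AutoModelForCausalLM

repo = "deqing/convergent-llama-300M-muon-isolate"

# Enumerate checkpoint branches instead of hard-coding names.
refs = list_repo_refs(repo)
branches = [b.name for b in refs.branches if b.name.startswith("tokens-")]

for branch in branches:  # list order is not token order; sort numerically if needed
    model = AutoModelForCausalLM.from_pretrained(repo, revision=branch)
    # ... run per-checkpoint analysis here ...
    print("loaded", branch)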

Citation

Paper forthcoming.
