---
library_name: transformers
tags:
  - convergent-evolution
  - fourier-features
  - number-embeddings
license: mit
datasets:
  - HuggingFaceFW/fineweb-edu
---

# convergent-llama-300M-muon-window_2

A 300M-parameter language model trained from scratch on FineWeb-Edu 10BT (~9.4B tokens, 1 epoch) as part of the Convergent Evolution project, which investigates how Fourier features emerge in LLM number embeddings.

## Model details

| Detail | Value |
|---|---|
| Architecture | LLaMA-style Transformer (12 layers, 1024 hidden dim, 16 heads, GQA) |
| Parameters | ~300M |
| Optimizer | Muon (2D weight matrices) + AdamW (embeddings, biases, norms) |
| Data perturbation | window-2 context (bigram-level context only) |
| Training data | FineWeb-Edu sample-10BT (~9.4B tokens) |
| Context length | 1024 |
| Tokenizer | Llama 3 (128K vocab) |
| Batch size | 512 sequences |

## Training dynamics

Intermediate checkpoints are saved as branches: tokens-200M, tokens-400M, ..., tokens-9.6B.

```python
from transformers import AutoModelForCausalLM

# Load final checkpoint
model = AutoModelForCausalLM.from_pretrained("deqing/convergent-llama-300M-muon-window_2")

# Load intermediate checkpoint (e.g., at 1B tokens)
model = AutoModelForCausalLM.from_pretrained("deqing/convergent-llama-300M-muon-window_2", revision="tokens-1B")
```
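The checkpoint branches are spaced at regular 200M-token intervals. A small helper (hypothetical, not shipped with this repo; the exact names of branches between 1B and 9.6B are an assumption extrapolated from the examples above) can enumerate the revision names when sweeping checkpoints:

```python
def checkpoint_revisions(step_m: int = 200, total_m: int = 9600) -> list[str]:
    """Enumerate branch names: 'tokens-200M', ..., 'tokens-1B', ..., 'tokens-9.6B'.

    Names below 1B use an 'M' suffix; 1B and above use a 'B' suffix
    (assumed from the branch names quoted in this card).
    """
    revisions = []
    for tokens_m in range(step_m, total_m + 1, step_m):
        if tokens_m < 1000:
            revisions.append(f"tokens-{tokens_m}M")
        else:
            revisions.append(f"tokens-{tokens_m / 1000:g}B")
    return revisions


# Example: sweep all intermediate checkpoints (requires network access)
# for rev in checkpoint_revisions():
#     model = AutoModelForCausalLM.from_pretrained(
#         "deqing/convergent-llama-300M-muon-window_2", revision=rev
#     )
```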

## Citation

Paper forthcoming.