OLMo-1B-as_fm3_tg_omi1_omi2
Collection
OLMo 1B model pretrained with Algebraic Stack, FineMath3+, TinyGSM, OMI1, and OMI2. Includes checkpoints from PPO training on the GSM8K train split. • 25 items
1B OLMo model pretrained on Algebraic Stack, FineMath3+, TinyGSM, OpenMathInstruct1, and OpenMathInstruct2.
Model names encode the contents of the pretraining dataset, delimited by underscores.
If a dataset abbreviation is prefixed with {n}x, that dataset was repeated n times during pretraining. For instance, 2xtg denotes two passes over the TinyGSM dataset.
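As a rough illustration of the naming scheme, a model name can be split into (dataset, repeat count) pairs. This is a minimal sketch: the abbreviation map (`as`, `fm3`, `tg`, `omi1`, `omi2`) and the `parse_name` helper are assumptions inferred from the collection title, not an official utility.

```python
import re

# Hypothetical abbreviation map, inferred from the collection description.
ABBREV = {
    "as": "Algebraic Stack",
    "fm3": "FineMath3+",
    "tg": "TinyGSM",
    "omi1": "OpenMathInstruct1",
    "omi2": "OpenMathInstruct2",
}

def parse_name(name: str):
    """Split a model name like 'OLMo-1B-as_fm3_tg_omi1_omi2' into
    (dataset, repeat_count) pairs. A '{n}x' prefix means n passes."""
    suffix = name.split("-")[-1]            # dataset list after the last '-'
    parts = []
    for token in suffix.split("_"):
        m = re.match(r"(\d+)x(.+)", token)  # e.g. '2xtg' -> 2 passes of 'tg'
        if m:
            n, abbr = int(m.group(1)), m.group(2)
        else:
            n, abbr = 1, token
        parts.append((ABBREV.get(abbr, abbr), n))
    return parts
```

For example, `parse_name("OLMo-1B-2xtg_omi1")` yields two passes over TinyGSM followed by one pass over OpenMathInstruct1.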