What is the training precision of Gemma 4? (bf16?)

#13 by hyunmin253

Hi, I'd like to confirm the training precision of Gemma 4 models.
Were they pre-trained with bfloat16 (bf16) precision?
Could you please clarify this in the model card?
Thank you!

Google org

Hi @hyunmin253
Correct, Gemma 4 uses bf16 for pre-training. You’ll see "dtype": "bfloat16" defined in the config.json, though I agree it should be more prominent in the documentation. I’ve passed this feedback to the team to get the model card updated. Thanks for bringing this to our attention.
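
For anyone else who wants to verify this themselves, here is a minimal sketch that fetches config.json from the Hub and reads the dtype field. Note that "google/gemma-4" is a placeholder repo id here, substitute the actual model id you're using:

```python
import json

from huggingface_hub import hf_hub_download

# Placeholder repo id; replace with the actual Gemma 4 model id on the Hub.
config_path = hf_hub_download("google/gemma-4", "config.json")

with open(config_path) as f:
    config = json.load(f)

# Newer configs use "dtype"; older ones used "torch_dtype".
print(config.get("dtype", config.get("torch_dtype")))  # expected: "bfloat16"
```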
