Transformers documentation

Mixed precision training


Full precision (fp32) training stores and computes everything in 32 bits. Mixed precision uses fp16 or bf16 for the compute-intensive forward and backward passes, while keeping an fp32 copy of the weights for the optimizer update. Compute is faster, weight and activation memory are reduced, and training stability is preserved.

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚           MIXED PRECISION TRAINING LOOP             β”‚
β”‚                                                     β”‚
β”‚  fp32 master weights ──cast──▢ fp16/bf16            β”‚
β”‚         β–²                          β”‚                β”‚
β”‚         β”‚                    FORWARD (autocast)     β”‚
β”‚         β”‚                    matmuls in fp16/bf16   β”‚
β”‚         β”‚                    reductions stay fp32   β”‚
β”‚         β”‚                          β”‚ loss           β”‚
β”‚         β”‚                    LOSS SCALE Γ—S  ──fp16  β”‚
β”‚         β”‚                          β”‚                β”‚
β”‚         β”‚                    BACKWARD               β”‚
β”‚         β”‚                    grads in fp16/bf16     β”‚
β”‚         β”‚                          β”‚                β”‚
β”‚         β”‚                    UNSCALE Γ·S    ──fp16   β”‚
β”‚         β”‚                    check inf/nan          β”‚
β”‚         β”‚                    cast grads β†’ fp32      β”‚
β”‚         └────────────────────────── optimizer.step  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Set bf16 or fp16 to True in TrainingArguments to enable mixed precision training. Both are 16-bit types, but bf16 has the same exponent range as fp32, so it almost never overflows. Use bf16 on Ampere or newer GPUs (A100, H100) and fall back to fp16 on older hardware like V100 or T4.
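You can see the range difference directly with torch.finfo. The numbers below illustrate why fp16 needs loss scaling while bf16 usually does not:

```python
import torch

# fp16 has a 5-bit exponent: values past ~65k overflow to inf.
# bf16 keeps fp32's 8-bit exponent, trading mantissa bits for range.
print(torch.finfo(torch.float16).max)    # 65504.0
print(torch.finfo(torch.bfloat16).max)   # ~3.39e38
print(torch.finfo(torch.float32).max)    # ~3.40e38

print(torch.tensor(70000.0).to(torch.float16))   # inf (overflow)
print(torch.tensor(70000.0).to(torch.bfloat16))  # 70144 (fits, less precise)
```

The flip side is visible in the last line: bf16 has fewer mantissa bits than fp16, so individual values are represented more coarsely.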

Load the model in fp32, otherwise autocast becomes a no-op. Loading in bf16 or fp16 leaves no fp32 master copy for the optimizer to update from.

from transformers import TrainingArguments

args = TrainingArguments(..., bf16=True)  # Ampere or newer GPUs
args = TrainingArguments(..., fp16=True)  # older GPUs (V100, T4)

If your model is numerically stable in bf16/fp16, you can skip mixed precision entirely and load and train directly in bf16/fp16. This avoids the fp32 copy of the weights in memory.

tf32

tf32 is a compute mode on Ampere and newer NVIDIA GPUs that rounds matmul inputs to a 10-bit mantissa while keeping fp32's 8-bit exponent, instead of using fp32's full 23-bit mantissa. This can give you a speedup, especially when paired with bf16/fp16. PyTorch enables tf32 for cuDNN convolutions by default, but since PyTorch 1.12 matmuls default to full fp32, so setting it explicitly in TrainingArguments ensures it's active regardless of the PyTorch version or environment defaults.

from transformers import TrainingArguments

args = TrainingArguments(..., bf16=True, tf32=True)
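Outside of TrainingArguments, the equivalent raw PyTorch switches are the backend flags below. The flags only change behavior for CUDA matmuls and convolutions on Ampere or newer GPUs, but they can be set (and read back) on any machine:

```python
import torch

torch.backends.cuda.matmul.allow_tf32 = True  # matmuls (off by default since PyTorch 1.12)
torch.backends.cudnn.allow_tf32 = True        # cuDNN convolutions (on by default)
```

On recent PyTorch versions, torch.set_float32_matmul_precision("high") is an alternative way to opt matmuls into tf32.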

Next steps

  • See the Kernels guide to learn how to speed up training with custom fused kernels.
  • See the torch.compile guide to learn how to compile the forward and backward pass for additional throughput.