Transformers documentation
Mixed precision training
Full precision (fp32) training stores and computes everything in 32 bits. Mixed precision uses fp16 or bf16 for the compute-intensive forward and backward passes, while keeping an fp32 copy of the weights for the optimizer update. Compute is faster, weight and activation memory are reduced, and training stability is preserved.
┌─────────────────────────────────────────────────────────┐
│              MIXED PRECISION TRAINING LOOP              │
│                                                         │
│   fp32 master weights ──cast──▶ fp16/bf16               │
│          ▲                          │                   │
│          │                  FORWARD (autocast)          │
│          │                    matmuls in fp16/bf16      │
│          │                    reductions stay fp32      │
│          │                          │ loss              │
│          │                  LOSS SCALE ×S ── fp16       │
│          │                          │                   │
│          │                  BACKWARD                    │
│          │                    grads in fp16/bf16        │
│          │                          │                   │
│          │                  UNSCALE ÷S ── fp16          │
│          │                    check inf/nan             │
│          │                    cast grads → fp32         │
│          └───────────────── optimizer.step              │
└─────────────────────────────────────────────────────────┘
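The loop above can be sketched in a few lines of plain PyTorch. This is a minimal CPU illustration with bf16 autocast; on CUDA you would pass device_type="cuda" and, for fp16 training, construct the GradScaler with enabled=True so the LOSS SCALE and UNSCALE steps actually run.

```python
import torch

model = torch.nn.Linear(16, 4)      # parameters stay in fp32 (master weights)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=False)  # enable for fp16 on CUDA

x = torch.randn(8, 16)
y = torch.randn(8, 4)

optimizer.zero_grad()
# FORWARD: the matmul runs in bf16 under autocast, weights remain fp32
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)
loss = torch.nn.functional.mse_loss(out.float(), y)  # loss reduced in fp32

scaler.scale(loss).backward()  # would multiply the loss by S if enabled
scaler.step(optimizer)         # unscales, checks inf/nan, then optimizer.step()
scaler.update()
```

With bf16 the scaler can stay disabled, since bf16's exponent range makes gradient underflow rare; it exists mainly to keep small fp16 gradients from flushing to zero.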
Set bf16 or fp16 to True in TrainingArguments to enable mixed precision training. Both are 16-bit types, but bf16 has the same exponent range as fp32, so it almost never overflows. Use bf16 on Ampere or newer GPUs (A100, H100) and fall back to fp16 on older hardware like V100 or T4.
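The overflow difference is easy to see numerically. fp16's largest finite value is about 65504, while bf16 keeps fp32's exponent range:

```python
import torch

x = torch.tensor(70000.0)     # larger than fp16's maximum of ~65504
print(x.to(torch.float16))    # overflows to inf
print(x.to(torch.bfloat16))   # stays finite, at reduced mantissa precision
```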
Load the model in fp32; otherwise, autocast becomes a no-op. Loading in bf16 or fp16 leaves no fp32 master copy for the optimizer to update from.
from transformers import TrainingArguments
args = TrainingArguments(..., bf16=True)
args = TrainingArguments(..., fp16=True)

If your model is numerically stable in bf16/fp16, you can skip mixed precision entirely and load and train directly in bf16/fp16. This avoids the fp32 copy of the weights in memory.
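A minimal plain-PyTorch sketch of that all-bf16 path, assuming the model is stable in bf16 (with Transformers you would instead load the model with torch_dtype=torch.bfloat16):

```python
import torch

# Everything in bf16: weights, activations, gradients, and the update.
# No fp32 master copy is kept, which is what saves the extra memory.
model = torch.nn.Linear(16, 4).to(torch.bfloat16)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 16, dtype=torch.bfloat16)
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()  # the update runs directly on the bf16 weights
```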
tf32
tf32 is a compute mode on Ampere and newer GPUs that runs fp32 matmuls with a 10-bit mantissa instead of fp32's full 23 bits, while keeping the fp32 exponent range. This can give you a speedup, especially when paired with bf16/fp16. Recent PyTorch releases leave tf32 matmuls disabled by default, so setting it explicitly in TrainingArguments ensures it's active regardless of the PyTorch version or environment defaults.
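Under the hood, tf32=True maps to PyTorch's global tf32 switches. A sketch of the equivalent flags, which only change actual matmul behavior on Ampere or newer hardware:

```python
import torch

# What tf32=True toggles in PyTorch:
torch.backends.cuda.matmul.allow_tf32 = True  # tf32 for fp32 matmuls
torch.backends.cudnn.allow_tf32 = True        # tf32 for cuDNN convolutions
```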
from transformers import TrainingArguments
args = TrainingArguments(..., bf16=True, tf32=True)

Next steps
- See the Kernels guide to learn how to speed up training with custom fused kernels.
- See the torch.compile guide to learn how to compile the forward and backward pass for additional throughput.