Transformers documentation
# Intel Gaudi
The Intel Gaudi AI accelerator family includes Intel Gaudi 1, Intel Gaudi 2, and Intel Gaudi 3. Each server has 8 Habana Processing Units (HPUs) with 128GB of memory on Gaudi 3, 96GB on Gaudi 2, and 32GB on first-gen Gaudi. The Gaudi Architecture overview covers the hardware in depth.
`TrainingArguments`, `Trainer`, and `Pipeline` detect Intel Gaudi devices and set the backend to `hpu` automatically.
## Environment variables
HPU lazy mode isn't compatible with all Transformers modeling code. If you run into errors, set the environment variable below to switch to eager mode.
```bash
export PT_HPU_LAZY_MODE=0
```

You may also need to enable int64 support to avoid casting issues with long integers.

```bash
export PT_ENABLE_INT64_SUPPORT=1
```
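Both variables can also be set from Python, as long as this happens before `torch` and the Habana frameworks are imported. A minimal sketch (using `setdefault` so any values already exported in the shell take precedence):

```python
import os

# Must run before `import torch` (and the Habana frameworks) for the
# settings to take effect. Values mirror the shell exports above.
os.environ.setdefault("PT_HPU_LAZY_MODE", "0")         # eager mode
os.environ.setdefault("PT_ENABLE_INT64_SUPPORT", "1")  # int64 casting

print(os.environ["PT_HPU_LAZY_MODE"], os.environ["PT_ENABLE_INT64_SUPPORT"])
```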
## Mixed precision

All Gaudi generations support bf16 natively. Enable it with the `bf16` training argument.
```py
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",
    bf16=True,  # supported on all Gaudi generations
)
```
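bf16 keeps float32's 8-bit exponent but only 7 mantissa bits, so it covers the same numeric range as float32 at reduced precision, which is why it usually trains stably without loss scaling. A pure-Python illustration of that trade-off (this helper truncates a float32 to its top 16 bits purely for demonstration; real hardware rounds rather than truncates):

```python
import struct

def to_bf16(x: float) -> float:
    """Round-trip a float through a bfloat16-like format by keeping only
    the top 16 bits of a float32 (sign, 8 exponent, 7 mantissa bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))
    return y

# Roughly 3 decimal digits of precision survive...
print(to_bf16(3.141592653589793))  # → 3.140625
# ...but very large magnitudes do not overflow, unlike fp16 (max ~6.5e4).
print(to_bf16(1e38))
```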
## torch.compile

Gaudi supports `torch.compile`. `TrainingArguments` automatically sets `torch_compile_backend` to `"hpu_backend"` when an HPU is detected.
```py
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",
    torch_compile=True,
)
```
## Distributed training

Multi-HPU training uses HCCL (Habana Collective Communications Library) as the distributed backend. HCCL is the default, but you can also set `ddp_backend` explicitly.
```py
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",
    ddp_backend="hccl",
)
```
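Multi-HPU jobs are typically started with a launcher such as `torchrun`, which tells each worker process who it is through environment variables. A framework-agnostic sketch of reading them (the variable names are the standard `torchrun` ones; the `dist_info` helper and its single-process defaults are illustrative assumptions, not a Transformers API):

```python
import os

def dist_info() -> dict:
    """Read the rank/world-size variables a launcher like torchrun sets
    for each worker, falling back to single-process defaults."""
    return {
        "rank": int(os.environ.get("RANK", 0)),
        "world_size": int(os.environ.get("WORLD_SIZE", 1)),
        "local_rank": int(os.environ.get("LOCAL_RANK", 0)),
    }

# On a single Gaudi server, world_size would be up to 8 (one process per
# HPU); run standalone, the defaults describe a lone process.
print(dist_info())
```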
## Next steps

- See the Gaudi docs for more detailed information about training.
- Try Optimum for Intel Gaudi for Gaudi-optimized model implementations during training and inference.