Transformers documentation

Intel Gaudi


The Intel Gaudi AI accelerator family includes Intel Gaudi 1, Intel Gaudi 2, and Intel Gaudi 3. Each server has 8 Habana Processing Units (HPUs) with 128GB of memory on Gaudi 3, 96GB on Gaudi 2, and 32GB on first-gen Gaudi. The Gaudi Architecture overview covers the hardware in depth.

TrainingArguments, Trainer, and Pipeline detect Intel Gaudi devices and set the backend to hpu automatically.

Environment variables

HPU lazy mode isn’t compatible with all Transformers modeling code. If you run into errors, set the environment variable below to switch to eager mode.

export PT_HPU_LAZY_MODE=0

You may also need to enable int64 support to avoid casting issues with long integers.

export PT_ENABLE_INT64_SUPPORT=1
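Both variables can also be set from Python instead of the shell, as long as this happens before torch and the Habana frameworks are imported, since they are read at library initialization time (a minimal sketch; the values mirror the exports above):

```python
import os

# Set these before importing torch / the Habana frameworks,
# because they are read once at library initialization time.
os.environ["PT_HPU_LAZY_MODE"] = "0"         # eager mode instead of lazy mode
os.environ["PT_ENABLE_INT64_SUPPORT"] = "1"  # allow int64 (long) tensors

print(os.environ["PT_HPU_LAZY_MODE"])         # 0
print(os.environ["PT_ENABLE_INT64_SUPPORT"])  # 1
```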

Mixed precision

All Gaudi generations support bf16 natively.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",
    bf16=True,  # supported on all Gaudi generations
)

torch.compile

Gaudi supports torch.compile. TrainingArguments automatically sets torch_compile_backend to "hpu_backend" when HPU is detected.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",
    torch_compile=True,
)
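The backend selection can be sketched as a small sketch function. `"hpu_backend"` mirrors what TrainingArguments picks when an HPU is detected; the `select_compile_backend` helper and its `hpu_available` flag are illustrative stand-ins here, not a real Transformers API, and `"inductor"` is PyTorch's default torch.compile backend on other devices.

```python
def select_compile_backend(hpu_available: bool) -> str:
    """Return the torch.compile backend name for the current device."""
    # Intel Gaudi uses the dedicated "hpu_backend"; everything else
    # falls back to PyTorch's default "inductor" backend.
    return "hpu_backend" if hpu_available else "inductor"

print(select_compile_backend(True))   # hpu_backend
print(select_compile_backend(False))  # inductor
```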

Distributed training

Multi-HPU training uses HCCL (Habana Collective Communications Library) as the distributed backend. HCCL is the default, but you can also set ddp_backend explicitly.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",
    ddp_backend="hccl",
)
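A multi-HPU run on a single 8-device Gaudi server might then be launched as follows (a sketch, assuming a standard torchrun launch; `train.py` is a placeholder for your own script that builds the Trainer from the arguments above):

```shell
# Launch one process per HPU on a single 8-device Gaudi server.
# train.py is a placeholder for your own training script; Trainer
# picks up the hccl backend from TrainingArguments.
torchrun --nproc_per_node=8 train.py --output_dir ./outputs
```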

Next steps

  • See the Gaudi docs for more detailed information about training.
  • Try Optimum for Intel Gaudi for Gaudi-optimized model implementations during training and inference.