Transformers documentation
Building a GPU workstation
The GPU is one of the most important choices when building a deep learning machine. Tensor cores handle matrix multiplication efficiently, and high memory bandwidth keeps data flowing. Training large models requires a more powerful GPU, multiple GPUs, or offloading techniques that move work to the CPU or NVMe.
The tips below cover practical GPU setup for deep learning.
Power
High-end consumer GPUs may have two or three PCIe 8-pin power sockets. Connect a separate 12V PCIe 8-pin cable to each socket. Don’t use a pigtail cable (a single cable that splits into two connectors at one end) to feed two sockets; otherwise the GPU won’t receive enough power for full performance.
Connect each PCIe 8-pin power cable to a 12V rail on the power supply unit (PSU). Each cable delivers up to 150W. Some GPUs use a PCIe 12-pin connector that delivers up to 500-600W. Lower-end GPUs may use a PCIe 6-pin connector that supplies up to 75W.
A PSU must maintain stable voltage because unstable voltage can starve the GPU of power during peak usage.
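The connector ratings above can be turned into a rough upper bound on how much power a card can draw. The sketch below is an illustration only (the wattage table and helper name are assumptions, not from any library); it uses the PCIe specification ceilings, plus the 75W the motherboard slot itself supplies.

```python
# Rough power ceiling per connector type (watts). These are PCIe
# specification maximums, not what a specific card actually draws.
CONNECTOR_WATTS = {"6-pin": 75, "8-pin": 150, "12-pin": 600}
PCIE_SLOT_WATTS = 75  # the motherboard slot itself supplies up to 75W

def max_board_power(connectors):
    """Upper bound on power available to a GPU from its connectors plus the slot."""
    return PCIE_SLOT_WATTS + sum(CONNECTOR_WATTS[c] for c in connectors)

# A GPU with three 8-pin sockets can draw up to 3 * 150W + 75W = 525W,
# which is why each socket needs its own cable from the PSU.
print(max_board_power(["8-pin", "8-pin", "8-pin"]))  # 525
```

Sizing the PSU with headroom above this bound helps it maintain stable voltage at peak load.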
Cooling
An overheated GPU throttles performance and may shut down to prevent damage. Keep temperatures between 70–75°C (158–167°F) for full performance and a longer lifespan. Above roughly 84–90°C (183–194°F), most GPUs start throttling.
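The ranges above can be checked programmatically. This is a minimal sketch (the threshold constants and function names are illustrative, and actual throttle points vary by GPU model):

```python
def c_to_f(celsius):
    """Convert Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

# Illustrative thresholds taken from the ranges above; real limits
# depend on the specific GPU and can be read from its driver tools.
FULL_PERF_MAX_C = 75   # aim to stay at or below this
THROTTLE_MIN_C = 84    # throttling typically begins somewhere above this

def cooling_status(temp_c):
    if temp_c <= FULL_PERF_MAX_C:
        return "ok"
    if temp_c < THROTTLE_MIN_C:
        return "warm"
    return "throttling likely"

print(c_to_f(70), c_to_f(75))  # 158.0 167.0
print(cooling_status(72))      # ok
```

In practice you would feed `cooling_status` with live readings, for example from `nvidia-smi --query-gpu=temperature.gpu --format=csv`.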
Multi-GPU connectivity
How your GPUs are connected matters for multi-GPU setups. NVLink connections are faster than PCIe bridges, but the impact depends on your parallelism strategy. DistributedDataParallel (DDP) requires less GPU-to-GPU communication than ZeRO, so connection speed matters less for it.
Run the command below to check how your GPUs are connected.
```
nvidia-smi topo -m
```
NVLink is NVIDIA’s high-speed communication system for connecting multiple GPUs.
```
        GPU0    GPU1    CPU Affinity    NUMA Affinity
GPU0     X      NV2     0-23            N/A
GPU1    NV2      X      0-23            N/A
```
NV2 indicates GPU0 and GPU1 are connected by 2 NVLinks.
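If you want to inspect the topology from a script, the matrix can be parsed with a few lines of Python. This is a hedged sketch (the function name is invented, and real `nvidia-smi topo -m` output also includes a legend and extra columns that a robust parser would handle):

```python
# Sample mirroring the two-GPU connection matrix shown above.
sample = """\
        GPU0    GPU1    CPU Affinity    NUMA Affinity
GPU0     X      NV2     0-23            N/A
GPU1    NV2      X      0-23            N/A
"""

def nvlink_pairs(topo_output):
    """Return {(row_gpu, col_gpu): n_links} for every NVx cell in the matrix."""
    lines = topo_output.strip().splitlines()
    # GPU columns are leftmost, so keeping only GPUx header tokens
    # lines them up positionally with the leading cells of each row.
    headers = [h for h in lines[0].split() if h.startswith("GPU")]
    links = {}
    for line in lines[1:]:
        cells = line.split()
        row = cells[0]
        for col, cell in zip(headers, cells[1:]):
            if cell.startswith("NV"):
                links[(row, col)] = int(cell[2:])  # "NV2" -> 2 NVLinks
    return links

print(nvlink_pairs(sample))  # {('GPU0', 'GPU1'): 2, ('GPU1', 'GPU0'): 2}
```

Here the matrix is symmetric, so each NVLink pair appears once per direction.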
Next steps
- See the Which GPU(s) to Get for Deep Learning blog post for a deeper comparison of GPUs.