<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
|
|
# Comparing performance between different device setups
|
|
Evaluating and comparing the performance of different setups can be quite tricky if you don't know what to look for.
For example, you cannot run the same script with the same batch size across TPU, multi-GPU, and single-GPU with Accelerate
and expect your results to line up.
|
|
But why?
|
|
There are three reasons for this, which this tutorial will cover:
|
|
1. **Setting the right seeds**
2. **Observed Batch Sizes**
3. **Learning Rates**
|
|
## Setting the Seed
|
|
While this issue has not come up as much, make sure to use [`utils.set_seed`] to fully set the seed in all distributed cases so training will be reproducible:
|
|
```python
from accelerate.utils import set_seed

set_seed(42)
```
|
|
Why is this important? Under the hood this will set **5** different seed settings:
|
|
```python
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)

if is_tpu_available():
    xm.set_rng_state(seed)
```
|
|
These are the Python `random` state, NumPy's state, torch's state, torch's CUDA state, and, if TPUs are available, torch_xla's RNG state.
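As a quick sanity check, here is a minimal sketch of the reproducibility this buys you (assuming only that `accelerate` and `torch` are installed):

```python
import torch

from accelerate.utils import set_seed

set_seed(42)
print(torch.rand(2))  # prints the same values on every run, and on every process
```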
|
|
## Observed Batch Sizes
|
|
When training with Accelerate, the batch size passed to the dataloader is the **batch size per GPU**. This means that
a batch size of 64 on two GPUs is really an observed batch size of 128. You need to account for this when testing on a
single GPU, and similarly for TPUs.
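For example, here is a minimal sketch of computing the observed batch size from the per-device value (`per_device_batch_size` is just an illustrative name):

```python
from accelerate import Accelerator

accelerator = Accelerator()

# The dataloader batch size is per device; each process draws this many samples
per_device_batch_size = 64

# The observed batch size is the per-device value times the number of processes
observed_batch_size = per_device_batch_size * accelerator.num_processes
```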
|
|
The table below can be used as a quick reference to try out different batch sizes:
|
|
<Tip>

In this example, there are two GPUs for "Multi-GPU" and a TPU pod with 8 workers.

</Tip>
|
|
| | Single GPU Batch Size | Multi-GPU Equivalent Batch Size | TPU Equivalent Batch Size | |
| |-----------------------|---------------------------------|---------------------------| |
| | 256 | 128 | 32 | |
| | 128 | 64 | 16 | |
| | 64 | 32 | 8 | |
| | 32 | 16 | 4 | |
|
|
## Learning Rates
|
|
As noted in multiple sources[[1](https://aws.amazon.com/blogs/machine-learning/scalable-multi-node-deep-learning-training-using-gpus-in-the-aws-cloud/)][[2](https://docs.nvidia.com/clara/tlt-mi_archive/clara-train-sdk-v2.0/nvmidl/appendix/training_with_multiple_gpus.html)], the learning rate should be scaled *linearly* based on the number of devices present. The snippet
below shows how to do so with Accelerate:
|
|
<Tip>

Since users can define their own learning rate schedulers, we leave it up to the user to decide whether or not to scale their
learning rate.

</Tip>
|
|
```python
from torch.optim import AdamW

from accelerate import Accelerator

accelerator = Accelerator()

learning_rate = 1e-3
# Scale the learning rate linearly by the number of processes
learning_rate *= accelerator.num_processes

optimizer = AdamW(params=model.parameters(), lr=learning_rate)
```
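Since scaling is a choice rather than a requirement, one way to keep it explicit is to gate it behind a flag; a sketch, where `scale_lr` is a hypothetical user option (not an Accelerate setting) and the model is a placeholder:

```python
import torch
from torch.optim import AdamW

from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(8, 2)  # placeholder model for illustration

base_learning_rate = 1e-3
scale_lr = True  # hypothetical flag, not an Accelerate option

learning_rate = base_learning_rate * (accelerator.num_processes if scale_lr else 1)
optimizer = AdamW(params=model.parameters(), lr=learning_rate)
```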
|
|
|
|