## LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS |
**Edward Hu\*** **Yelong Shen\*** **Phillip Wallis** **Zeyuan Allen-Zhu** **Lu Wang** **Weizhu Chen** **Yuanzhi Li** **Shean Wang** |
Microsoft Corporation |
{edwardhu, yeshe, phwallis, zeyuana, yuanzhil, swang, luw, wzchen}@microsoft.com |
yuanzhil@andrew.cmu.edu |
(Version 2) |
**ABSTRACT** |
An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example – deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints at https://github.com/microsoft/LoRA.
**1 INTRODUCTION** |
Many applications in natural language processing rely on adapting one large-scale, pre-trained language model to multiple downstream applications. Such adaptation is usually done via fine-tuning, which updates all the parameters of the pre-trained model. The major downside of fine-tuning is that the new model contains as many parameters as the original model. As larger models are trained every few months, this changes from a mere "inconvenience" for GPT-2 (Radford et al., b) or RoBERTa large (Liu et al., 2019) to a critical deployment challenge for GPT-3 (Brown et al., 2020) with 175 billion trainable parameters.
Many sought to mitigate this by adapting only some parameters or learning external modules for new tasks. This way, we only need to store and load a small number of task-specific parameters in addition to the pre-trained model for each task, greatly boosting the operational efficiency when deployed. However, existing techniques often introduce inference latency (Houlsby et al., 2019) by extending model depth or reduce the model's usable sequence length (Li & Liang, 2021; Lester et al., 2021) (Section 3). More importantly, these methods often fail to match the fine-tuning baselines, posing a trade-off between efficiency and model quality.
We take inspiration from Li et al. (2018a); Aghajanyan et al. (2020) which show that the learned over-parametrized models in fact reside on a low intrinsic dimension. We hypothesize that the change in weights during model adaptation also has a low "intrinsic rank", leading to our proposed Low-Rank Adaptation (LoRA) approach. LoRA allows us to train some dense layers in a neural network indirectly by optimizing rank decomposition matrices of the dense layers' change during adaptation instead, while keeping the pre-trained weights frozen, as shown in Figure 1. Using GPT-3 175B as an example, we show that a very low rank suffices even when the full rank is as high as 12,288, making LoRA both storage- and compute-efficient.
<div align="center"> |
<img src="lora_figure1.png" width="300"/> |
<p>Figure 1: Our reparametrization. We only train A and B.</p> |
</div> |
LoRA possesses several key advantages.
* A pre-trained model can be shared and used to build many small LoRA modules for different tasks. We can freeze the shared model and efficiently switch tasks by replacing the matrices A and B in Figure 1, reducing the storage requirement and task-switching overhead significantly. |
* LoRA makes training more efficient and lowers the hardware barrier to entry by up to 3 times when using adaptive optimizers since we do not need to calculate the gradients or maintain the optimizer states for most parameters. Instead, we only optimize the injected, much smaller low-rank matrices.
* Our simple linear design allows us to merge the trainable matrices with the frozen weights when deployed, introducing no inference latency compared to a fully fine-tuned model, by construction. |
* LoRA is orthogonal to many prior methods and can be combined with many of them, such as prefix-tuning. We provide an example in Appendix E.
**Terminologies and Conventions** We make frequent references to the Transformer architecture and use the conventional terminologies for its dimensions. We call the input and output dimension size of a Transformer layer *d<sub>model</sub>*. We use W<sub>q</sub>, W<sub>k</sub>, W<sub>v</sub>, and W<sub>o</sub> to refer to the query/key/value/output projection matrices in the self-attention module. W or W<sub>0</sub> refers to a pre-trained weight matrix and ΔW its accumulated gradient update during adaptation. We use *r* to denote the rank of a LoRA module. We follow the conventions set out by (Vaswani et al., 2017; Brown et al., 2020), use Adam for model optimization, and use a Transformer MLP feedforward dimension *d<sub>ffn</sub>* = 4 × *d<sub>model</sub>*.
**2 PROBLEM STATEMENT** |
While our proposal is agnostic to training objective, we focus on language modeling as our motivating use case. Below is a brief description of the language modeling problem and, in particular, the maximization of conditional probabilities given a task-specific prompt. |
Suppose we are given a pre-trained autoregressive language model *P<sub>Φ</sub>(y|x)* parametrized by Φ. For instance, *P<sub>Φ</sub>(y|x)* can be a generic multi-task learner such as GPT (Radford et al., b; Brown et al., 2020) based on the Transformer architecture (Vaswani et al., 2017). Consider adapting this pre-trained model to downstream conditional text generation tasks, such as summarization, machine reading comprehension (MRC), and natural language to SQL (NL2SQL). Each downstream task is represented by a training dataset of context-target pairs Z = {(x<sub>i</sub>, y<sub>i</sub>)}<sub>i=1,…,N</sub>, where both x<sub>i</sub> and y<sub>i</sub> are sequences of tokens. For example, in NL2SQL, x<sub>i</sub> is a natural language query and y<sub>i</sub> its corresponding SQL command; for summarization, x<sub>i</sub> is the content of an article and y<sub>i</sub> its summary.
During full fine-tuning, the model is initialized to pre-trained weights Φ<sub>0</sub> and updated to Φ<sub>0</sub> + ΔΦ by repeatedly following the gradient to maximize the conditional language modeling objective: |
<div align="center"> |
<img src="lora_equation1.png" width="300"/> |
<p>Equation 1</p> |
</div> |
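Since Equation 1 appears above only as an image, the following is a restatement of the objective it denotes, assembled from the definitions in the surrounding text:

```latex
% Equation 1: full fine-tuning maximizes the conditional language modeling
% objective over the task's context-target pairs Z = {(x_i, y_i)}.
\max_{\Phi} \; \sum_{(x,y) \in \mathcal{Z}} \; \sum_{t=1}^{|y|}
  \log \big( P_{\Phi}(y_t \mid x, y_{<t}) \big)
```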
One of the main drawbacks for full fine-tuning is that for each downstream task, we learn a different set of parameters ΔΦ whose dimension |ΔΦ| equals |Φ<sub>0</sub>|. Thus, if the pre-trained model is large (such as GPT-3 with |Φ<sub>0</sub>| ≈ 175 Billion), storing and deploying many independent instances of fine-tuned models can be challenging, if at all feasible.
In this paper, we adopt a more parameter-efficient approach, where the task-specific parameter increment ΔΦ = ΔΦ(Θ) is further encoded by a much smaller-sized set of parameters Θ with |Θ| « |Φ<sub>0</sub>|. The task of finding ΔΦ thus becomes optimizing over Θ: |
<div align="center"> |
<img src="lora_equation2.png" width="300"/> |
<p>Equation 2</p> |
</div> |
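As above, a text restatement of Equation 2: the same objective, now optimized only over the much smaller parameter set Θ that encodes ΔΦ:

```latex
% Equation 2: optimize only Theta, which parametrizes the task-specific
% increment DeltaPhi(Theta) added to the frozen pre-trained weights Phi_0.
\max_{\Theta} \; \sum_{(x,y) \in \mathcal{Z}} \; \sum_{t=1}^{|y|}
  \log \big( p_{\Phi_0 + \Delta\Phi(\Theta)}(y_t \mid x, y_{<t}) \big)
```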
In the subsequent sections, we propose to use a low-rank representation to encode ΔΦ that is both compute- and memory-efficient. When the pre-trained model is GPT-3 175B, the number of trainable parameters |Θ| can be as small as 0.01% of |Φ<sub>0</sub>|. |
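As a rough sanity check on the 0.01% figure, the sketch below counts LoRA parameters for a GPT-3-scale configuration. The layer count, hidden size, rank, and the choice to adapt only W<sub>q</sub> and W<sub>v</sub> are illustrative assumptions (anticipating Section 4.2), not output of any released code.

```python
# Back-of-the-envelope count of LoRA trainable parameters, |Theta|.
# Assumed GPT-3 175B-like shape: 96 Transformer layers, d_model = 12288,
# with LoRA applied only to W_q and W_v (two adapted matrices per layer);
# each adapted matrix contributes A (r x d_model) and B (d_model x r).

def lora_trainable_params(n_layers=96, d_model=12288, r=4, adapted_per_layer=2):
    per_matrix = 2 * d_model * r              # A and B together
    return n_layers * adapted_per_layer * per_matrix

pretrained = 175e9                            # |Phi_0| for GPT-3 175B
trainable = lora_trainable_params(r=4)
print(f"|Theta| = {trainable / 1e6:.1f}M "
      f"({100 * trainable / pretrained:.4f}% of |Phi_0|)")
# With these illustrative numbers, r = 4 gives about 18.9M trainable
# parameters, i.e. on the order of 0.01% of the 175B frozen weights.
```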
**3 AREN'T EXISTING SOLUTIONS GOOD ENOUGH?** |
The problem we set out to tackle is by no means new. Since the inception of transfer learning, dozens of works have sought to make model adaptation more parameter- and compute-efficient. See Section 6 for a survey of some of the well-known works. Using language modeling as an example, there are two prominent strategies when it comes to efficient adaptations: adding adapter layers (Houlsby et al., 2019) or optimizing some forms of the input layer activations (Li & Liang, 2021; Lester et al., 2021). However, both strategies have their limitations, especially in a large-scale and latency-sensitive production scenario.
**Adapter Layers Introduce Inference Latency** There are many variants of adapters. We focus on the original design by Houlsby et al. (2019) which has two adapter layers per Transformer block and a more recent one by Lin et al. (2020) which has only one per block but with an additional LayerNorm (Ba et al., 2016). While one can reduce the overall latency by pruning layers or exploiting multi-task settings, there is no direct way to bypass the extra compute in adapter layers. This seems like a non-issue since adapter layers are designed to have few parameters (sometimes <1% of the original model) by having a small bottleneck dimension, which limits the FLOPs they can add. However, large neural networks rely on hardware parallelism to keep the latency low, and adapter layers have to be processed sequentially. This makes a difference in the online inference setting where the batch size is typically as small as one. In a generic scenario without model parallelism, such as running inference on GPT-2 (Radford et al., b) medium on a single GPU, we see a noticeable increase in latency when using adapters, even with a very small bottleneck dimension (Table 1).
<div align="center"> |
<img src="lora_table1.png" width="500"/> |
<p>Table 1: Inference latency of a single forward pass in GPT-2 medium measured in milliseconds, averaged over 100 trials. We use an NVIDIA Quadro RTX8000. "|Θ|" denotes the number of trainable parameters in adapter layers. Adapter<sup>H</sup> and Adapter<sup>LN</sup> are two variants of adapter tuning, which we describe in Section 5.1. The inference latency introduced by adapter layers can be significant in an online, short-sequence-length scenario.</p>
</div> |
This problem gets worse when we need to shard the model as done in Shoeybi et al. (2020); Lepikhin et al. (2020), because the additional depth requires more synchronous GPU operations such as AllReduce and Broadcast, unless we store the adapter parameters redundantly many times. |
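To make the latency argument concrete, below is a minimal PyTorch rendering of a bottleneck adapter in the spirit of Houlsby et al. (2019); it is an illustration, not the exact module used in our measurements. The point is structural: the down-projection, nonlinearity, and up-projection form an extra sequential step after the frozen sublayer, so their cost cannot be hidden behind the existing matrix multiplications when the batch size is small.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Illustrative adapter block in the spirit of Houlsby et al. (2019).

    The bottleneck keeps the parameter count low, but the block still adds
    two sequential matrix multiplications to every forward pass, which is
    the source of the extra inference latency discussed above.
    """
    def __init__(self, d_model: int, bottleneck: int):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)  # d_model -> bottleneck
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, d_model)    # bottleneck -> d_model

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual connection around the small bottleneck MLP.
        return h + self.up(self.act(self.down(h)))
```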
**Directly Optimizing the Prompt is Hard** The other direction, as exemplified by prefix tuning (Li & Liang, 2021), faces a different challenge. We observe that prefix tuning is difficult to optimize and that its performance changes non-monotonically in trainable parameters, confirming similar observations in the original paper. More fundamentally, reserving a part of the sequence length for adaptation necessarily reduces the sequence length available to process a downstream task, which we suspect makes tuning the prompt less performant compared to other methods. We defer the study on task performance to Section 5.
**4 OUR METHOD** |
We describe the simple design of LoRA and its practical benefits. The principles outlined here apply to any dense layers in deep learning models, though we only focus on certain weights in Transformer language models in our experiments as the motivating use case. |
**4.1 LOW-RANK-PARAMETRIZED UPDATE MATRICES** |
A neural network contains many dense layers which perform matrix multiplication. The weight matrices in these layers typically have full-rank. When adapting to a specific task, Aghajanyan et al. (2020) shows that the pre-trained language models have a low "intrinsic dimension" and can still learn efficiently despite a random projection to a smaller subspace. Inspired by this, we hypothesize the updates to the weights also have a low "intrinsic rank" during adaptation. For a pre-trained weight matrix W<sub>0</sub> ∈ R<sup>*d*×*k*</sup>, we constrain its update by representing the latter with a low-rank decomposition W<sub>0</sub> + ΔW = W<sub>0</sub> + BA, where B ∈ R<sup>*d*×*r*</sup>, A ∈ R<sup>*r*×*k*</sup>, and the rank *r* ≪ min(*d*, *k*). During training, W<sub>0</sub> is frozen and does not receive gradient updates, while A and B contain trainable parameters. Note that both W<sub>0</sub> and ΔW = BA are multiplied with the same input, and their respective output vectors are summed coordinate-wise. For h = W<sub>0</sub>x, our modified forward pass yields:
<div align="center"> |
<img src="lora_equation3.png" width="350"/> |
<p>Equation 3</p> |
</div> |
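Since Equation 3 also appears only as an image, the modified forward pass it denotes, using the decomposition W<sub>0</sub> + ΔW = W<sub>0</sub> + BA defined above, is:

```latex
% Equation 3: output of an adapted layer is the frozen path plus the
% low-rank update applied to the same input x.
h = W_0 x + \Delta W x = W_0 x + B A x
```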
We illustrate our reparametrization in Figure 1. We use a random Gaussian initialization for A and zero for B, so ΔW = BA is zero at the beginning of training. We then scale ΔWx by *α*/*r*, where *α* is a constant in *r*. When optimizing with Adam, tuning *α* is roughly the same as tuning the learning rate if we scale the initialization appropriately. As a result, we simply set *α* to the first *r* we try and do not tune it. This scaling helps to reduce the need to retune hyperparameters when we vary *r*.
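The initialization and scaling described above can be sketched as a drop-in linear layer. This is a minimal PyTorch illustration, not the released package's API; the class name, default rank, and the stand-in initialization of the frozen weight are assumptions made for the example.

```python
import math
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Sketch of a LoRA-adapted linear layer: h = W0 x + (alpha / r) * B A x.

    W0 is frozen; only A (Gaussian-initialized) and B (zero-initialized)
    are trained, so Delta W = BA is zero at the start of training.
    """
    def __init__(self, d_in: int, d_out: int, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(d_out, d_in), requires_grad=False)
        self.lora_A = nn.Parameter(torch.empty(r, d_in))
        self.lora_B = nn.Parameter(torch.zeros(d_out, r))
        self.scaling = alpha / r
        nn.init.normal_(self.lora_A, std=0.02)                  # random Gaussian init for A
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))   # stand-in; in practice W0 is the pre-trained weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        frozen = x @ self.weight.T                   # W0 x
        update = (x @ self.lora_A.T) @ self.lora_B.T  # B A x, computed low-rank first
        return frozen + self.scaling * update
```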
**A Generalization of Full Fine-tuning.** A more general form of fine-tuning allows the training of a subset of the pre-trained parameters. LoRA takes a step further and does not require the accumulated gradient update to weight matrices to have full-rank during adaptation. This means that when applying LoRA to all weight matrices and training all biases, we roughly recover the expressiveness of full fine-tuning by setting the LoRA rank *r* to the rank of the pre-trained weight matrices. In other words, as we increase the number of trainable parameters, training LoRA roughly converges to training the original model, while adapter-based methods converge to an MLP and prefix-based methods to a model that cannot take long input sequences.
**No Additional Inference Latency.** When deployed in production, we can explicitly compute and store W = W<sub>0</sub> + BA and perform inference as usual. Note that both W<sub>0</sub> and BA are in R<sup>*d*×*k*</sup>. When we need to switch to another downstream task, we can recover W<sub>0</sub> by subtracting BA and then adding a different B′A′, a quick operation with very little memory overhead. Critically, this guarantees that we do not introduce any additional latency during inference compared to a fine-tuned model by construction.
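A minimal sketch of this deploy-time bookkeeping, assuming the hypothetical LoRALinear layer from the previous sketch: merging folds BA into the stored weight so inference uses a single matrix multiplication, and subtracting BA restores W<sub>0</sub> before swapping in another task's matrices.

```python
import torch

@torch.no_grad()
def merge(layer: "LoRALinear") -> None:
    # Fold the low-rank update into the stored weight: W = W0 + (alpha / r) * B A.
    # After merging, the LoRA branch should be skipped (or B zeroed) so the
    # update is not applied twice in the forward pass.
    layer.weight += layer.scaling * (layer.lora_B @ layer.lora_A)

@torch.no_grad()
def unmerge(layer: "LoRALinear") -> None:
    # Recover W0 by subtracting BA, e.g. before loading a different task's A and B.
    layer.weight -= layer.scaling * (layer.lora_B @ layer.lora_A)
```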
**4.2 APPLYING LORA TO TRANSFORMER** |
In principle, we can apply LoRA to any subset of weight matrices in a neural network to reduce the number of trainable parameters. In the Transformer architecture, there are four weight matrices in the self-attention module (W<sub>q</sub>, W<sub>k</sub>, W<sub>v</sub>, W<sub>o</sub>) and two in the MLP module. We treat W<sub>q</sub> (or W<sub>k</sub>, W<sub>v</sub>) as a single matrix of dimension *d<sub>model</sub>* × *d<sub>model</sub>*, even though the output dimension is usually sliced into attention heads. We limit our study to only adapting the attention weights for downstream tasks and freeze the MLP modules (so they are not trained in downstream tasks) both for simplicity and parameter-efficiency.
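To make the restriction to attention weights concrete, the hypothetical helper below again assumes the LoRALinear sketch from Section 4.1 and a model whose attention blocks expose `q_proj` / `v_proj` as `nn.Linear`; the attribute names vary between implementations and biases are omitted for brevity. It freezes every pre-trained parameter and replaces only W<sub>q</sub> and W<sub>v</sub> with LoRA-adapted layers.

```python
import torch.nn as nn

def add_lora_to_attention(model: nn.Module, r: int = 4, alpha: float = 4.0) -> nn.Module:
    """Freeze all pre-trained weights and adapt only W_q and W_v with LoRA.

    Assumes attention blocks expose `q_proj` / `v_proj` as nn.Linear;
    these attribute names are illustrative, not universal.
    """
    for p in model.parameters():
        p.requires_grad = False                              # freeze the pre-trained model
    for module in model.modules():
        for name in ("q_proj", "v_proj"):
            proj = getattr(module, name, None)
            if isinstance(proj, nn.Linear):
                lora = LoRALinear(proj.in_features, proj.out_features, r=r, alpha=alpha)
                lora.weight.data.copy_(proj.weight.data)     # reuse the pre-trained W0
                setattr(module, name, lora)                  # only A and B remain trainable
    return model
```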