diff --git a/alignment-papers-text/1412.2306_Deep_Visual-Semantic_Alignments_for_Generating_Ima.md b/alignment-papers-text/1412.2306_Deep_Visual-Semantic_Alignments_for_Generating_Ima.md new file mode 100644 index 0000000000000000000000000000000000000000..77515ae9b16aa4c5ab63e0c71bc5c5cbdf7e9654 --- /dev/null +++ b/alignment-papers-text/1412.2306_Deep_Visual-Semantic_Alignments_for_Generating_Ima.md @@ -0,0 +1,1273 @@ +## **Deep Visual-Semantic Alignments for Generating Image Descriptions** + +Andrej Karpathy Li Fei-Fei +Department of Computer Science, Stanford University + + +_{_ karpathy,feifeili _}_ @cs.stanford.edu + + +**Abstract** + + + +_We present a model that generates natural language de-_ +_scriptions of images and their regions. Our approach lever-_ +_ages datasets of images and their sentence descriptions to_ +_learn about the inter-modal correspondences between lan-_ +_guage and visual data. Our alignment model is based on a_ +_novel combination of Convolutional Neural Networks over_ +_image regions, bidirectional Recurrent Neural Networks_ +_over sentences, and a structured objective that aligns the_ +_two modalities through a multimodal embedding. We then_ +_describe a Multimodal Recurrent Neural Network architec-_ +_ture that uses the inferred alignments to learn to generate_ +_novel descriptions of image regions. We demonstrate that_ +_our alignment model produces state of the art results in re-_ +_trieval experiments on Flickr8K, Flickr30K and MSCOCO_ +_datasets. We then show that the generated descriptions sig-_ +_nificantly outperform retrieval baselines on both full images_ +_and on a new dataset of region-level annotations._ + + +**1. Introduction** + + +A quick glance at an image is sufficient for a human to +point out and describe an immense amount of details about +the visual scene [14]. However, this remarkable ability has +proven to be an elusive task for our visual recognition models. The majority of previous work in visual recognition +has focused on labeling images with a fixed set of visual +categories and great progress has been achieved in these endeavors [45, 11]. However, while closed vocabularies of visual concepts constitute a convenient modeling assumption, +they are vastly restrictive when compared to the enormous +amount of rich descriptions that a human can compose. + + +Some pioneering approaches that address the challenge of +generating image descriptions have been developed [29, +13]. However, these models often rely on hard-coded visual +concepts and sentence templates, which imposes limits on +their variety. Moreover, the focus of these works has been +on reducing complex visual scenes into a single sentence, +which we consider to be an unnecessary restriction. + + +In this work, we strive to take a step towards the goal of + + + +Figure 1. Motivation/Concept Figure: Our model treats language +as a rich label space and generates descriptions of image regions. + + +generating dense descriptions of images (Figure 1). The +primary challenge towards this goal is in the design of a +model that is rich enough to simultaneously reason about +contents of images and their representation in the domain +of natural language. Additionally, the model should be free +of assumptions about specific hard-coded templates, rules +or categories and instead rely on learning from the training +data. 
The second, practical challenge is that datasets of image captions are available in large quantities on the internet + +[21, 58, 37], but these descriptions multiplex mentions of +several entities whose locations in the images are unknown. + + +Our core insight is that we can leverage these large imagesentence datasets by treating the sentences as weak labels, +in which contiguous segments of words correspond to some +particular, but unknown location in the image. Our approach is to infer these alignments and use them to learn +a generative model of descriptions. Concretely, our contributions are twofold: + + +_•_ We develop a deep neural network model that infers the latent alignment between segments of sentences and the region of the image that they describe. + + +Our model associates the two modalities through a +common, multimodal embedding space and a structured objective. We validate the effectiveness of this +approach on image-sentence retrieval experiments in +which we surpass the state-of-the-art. + + +_•_ We introduce a multimodal Recurrent Neural Network +architecture that takes an input image and generates +its description in text. Our experiments show that the +generated sentences significantly outperform retrievalbased baselines, and produce sensible qualitative predictions. We then train the model on the inferred correspondences and evaluate its performance on a new +dataset of region-level annotations. + + +We make code, data and annotations publicly available. [1] + + +**2. Related Work** + + +**Dense image annotations.** Our work shares the high-level +goal of densely annotating the contents of images with +many works before us. Barnard et al. [2] and Socher et +al. [48] studied the multimodal correspondence between +words and images to annotate segments of images. Several works [34, 18, 15, 33] studied the problem of holistic +scene understanding in which the scene type, objects and +their spatial support in the image is inferred. However, the +focus of these works is on correctly labeling scenes, objects +and regions with a fixed set of categories, while our focus is +on richer and higher-level descriptions of regions. + + +**Generating descriptions.** The task of describing images +with sentences has also been explored. A number of approaches pose the task as a retrieval problem, where the +most compatible annotation in the training set is transferred +to a test image [21, 49, 13, 43, 23], or where training annotations are broken up and stitched together [30, 35, 31]. +Several approaches generate image captions based on fixed +templates that are filled based on the content of the image + +[19, 29, 13, 55, 56, 9, 1] or generative grammars [42, 57], +but this approach limits the variety of possible outputs. +Most closely related to us, Kiros et al. [26] developed a logbilinear model that can generate full sentence descriptions +for images, but their model uses a fixed window context +while our Recurrent Neural Network (RNN) model conditions the probability distribution over the next word in a sentence on all previously generated words. Multiple closely +related preprints appeared on Arxiv during the submission +of this work, some of which also use RNNs to generate image descriptions [38, 54, 8, 25, 12, 5]. Our RNN is simpler +than most of these approaches but also suffers in performance. We quantify this comparison in our experiments. 
+ + +**Grounding natural language in images.** A number of approaches have been developed for grounding text in the vi + +1cs.stanford.edu/people/karpathy/deepimagesent + + + +sual domain [27, 39, 60, 36]. Our approach is inspired by +Frome et al. [16] who associate words and images through +a semantic embedding. More closely related is the work of +Karpathy et al. [24], who decompose images and sentences +into fragments and infer their inter-modal alignment using a +ranking objective. In contrast to their model which is based +on grounding dependency tree relations, our model aligns +contiguous segments of sentences which are more meaningful, interpretable, and not fixed in length. + + +**Neural networks in visual and language domains.** Multiple approaches have been developed for representing images and words in higher-level representations. On the image side, Convolutional Neural Networks (CNNs) [32, 28] +have recently emerged as a powerful class of models for +image classification and object detection [45]. On the sentence side, our work takes advantage of pretrained word +vectors [41, 22, 3] to obtain low-dimensional representations of words. Finally, Recurrent Neural Networks have +been previously used in language modeling [40, 50], but we +additionally condition these models on images. + + +**3. Our Model** + +**Overview** . The ultimate goal of our model is to generate +descriptions of image regions. During training, the input +to our model is a set of images and their corresponding +sentence descriptions (Figure 2). We first present a model +that aligns sentence snippets to the visual regions that they +describe through a multimodal embedding. We then treat +these correspondences as training data for a second, multimodal Recurrent Neural Network model that learns to generate the snippets. + + +**3.1. Learning to align visual and language data** +Our alignment model assumes an input dataset of images +and their sentence descriptions. Our key insight is that sentences written by people make frequent references to some +particular, but unknown location in the image. For example, in Figure 2, the words _“Tabby cat is leaning”_ refer to +the cat, the words _“wooden table”_ refer to the table, etc. +We would like to infer these latent correspondences, with +the eventual goal of later learning to generate these snippets +from image regions. We build on the approach of Karpathy +et al. [24], who learn to ground dependency tree relations +to image regions with a ranking objective. Our contribution is in the use of bidirectional recurrent neural network +to compute word representations in the sentence, dispensing of the need to compute dependency trees and allowing +unbounded interactions of words and their context in the +sentence. We also substantially simplify their objective and +show that both modifications improve ranking performance. + + +We first describe neural networks that map words and image +regions into a common, multimodal embedding. Then we +introduce our novel objective, which learns the embedding + + +Figure 2. Overview of our approach. A dataset of images and their sentence descriptions is the input to our model (left). Our model first +infers the correspondences (middle, Section 3.1) and then learns to generate novel descriptions (right, Section 3.2). + + + +representations so that semantically similar concepts across +the two modalities occupy nearby regions of the space. 
+ + +**3.1.1** **Representing images** + + +Following prior work [29, 24], we observe that sentence descriptions make frequent references to objects and their attributes. Thus, we follow the method of Girshick et al. [17] +to detect objects in every image with a Region Convolutional Neural Network (RCNN). The CNN is pre-trained on +ImageNet [6] and finetuned on the 200 classes of the ImageNet Detection Challenge [45]. Following Karpathy et al. + +[24], we use the top 19 detected locations in addition to the +whole image and compute the representations based on the +pixels _Ib_ inside each bounding box as follows: + + +_v_ = _Wm_ [ _CNNθc_ ( _Ib_ )] + _bm,_ (1) + + +where _CNN_ ( _Ib_ ) transforms the pixels inside bounding box +_Ib_ into 4096-dimensional activations of the fully connected +layer immediately before the classifier. The CNN parameters _θc_ contain approximately 60 million parameters. The +matrix _Wm_ has dimensions _h ×_ 4096, where _h_ is the size +of the multimodal embedding space ( _h_ ranges from 10001600 in our experiments). Every image is thus represented +as a set of _h_ -dimensional vectors _{vi | i_ = 1 _. . ._ 20 _}_ . + + +**3.1.2** **Representing sentences** + + +To establish the inter-modal relationships, we would like +to represent the words in the sentence in the same _h_ dimensional embedding space that the image regions occupy. The simplest approach might be to project every individual word directly into this embedding. However, this +approach does not consider any ordering and word context +information in the sentence. An extension to this idea is +to use word bigrams, or dependency tree relations as previously proposed [24]. However, this still imposes an arbitrary maximum size of the context window and requires +the use of Dependency Tree Parsers that might be trained on +unrelated text corpora. + + +To address these concerns, we propose to use a Bidirectional Recurrent Neural Network (BRNN) [46] to compute +the word representations. The BRNN takes a sequence of + + + +_N_ words (encoded in a 1-of-k representation) and transforms each one into an _h_ -dimensional vector. However, the +representation of each word is enriched by a variably-sized +context around that word. Using the index _t_ = 1 _. . . N_ to +denote the position of a word in a sentence, the precise form +of the BRNN is as follows: + + +_xt_ = _Ww_ I _t_ (2) + +_et_ = _f_ ( _Wext_ + _be_ ) (3) + +_h_ _[f]_ _t_ [=] _[ f]_ [(] _[e][t]_ [+] _[ W][f]_ _[h][f]_ _t−_ 1 [+] _[ b][f]_ [)] (4) + +_h_ _[b]_ _t_ [=] _[ f]_ [(] _[e][t]_ [+] _[ W][b][h][b]_ _t_ +1 [+] _[ b][b]_ [)] (5) + +_st_ = _f_ ( _Wd_ ( _h_ _[f]_ _t_ [+] _[ h]_ _t_ _[b]_ [) +] _[ b][d]_ [)] _[.]_ (6) + + +Here, I _t_ is an indicator column vector that has a single one +at the index of the _t_ -th word in a word vocabulary. The +weights _Ww_ specify a word embedding matrix that we initialize with 300-dimensional word2vec [41] weights and +keep fixed due to overfitting concerns. However, in practice we find little change in final performance when these +vectors are trained, even from random initialization. Note +that the BRNN consists of two independent streams of processing, one moving left to right ( _h_ _[f]_ _t_ [) and the other right to] +left ( _h_ _[b]_ _t_ [) (see Figure][ 3][ for diagram). The final] _[ h]_ [-dimensional] +representation _st_ for the _t_ -th word is a function of both the +word at that location and also its surrounding context in the +sentence. 
Technically, every $s_t$ is a function of all words in the entire sentence, but our empirical finding is that the final word representations ($s_t$) align most strongly to the visual concept of the word at that location ($\mathbb{I}_t$).

We learn the parameters $W_e, W_f, W_b, W_d$ and the respective biases $b_e, b_f, b_b, b_d$. A typical size of the hidden representation in our experiments ranges between 300 and 600 dimensions. We set the activation function $f$ to the rectified linear unit (ReLU), which computes $f: x \mapsto \max(0, x)$.

**3.1.3 Alignment objective**

We have described the transformations that map every image and sentence into a set of vectors in a common $h$-dimensional space. Since the supervision is at the level of entire images and sentences, our strategy is to formulate an image-sentence score as a function of the individual region-word scores. Intuitively, a sentence-image pair should have a high matching score if its words have a confident support in the image. The model of Karpathy et al. [24] interprets the dot product $v_i^T s_t$ between the $i$-th region and $t$-th word as a measure of similarity and uses it to define the score between image $k$ and sentence $l$ as:

$$ S_{kl} = \sum_{t \in g_l} \sum_{i \in g_k} \max(0, v_i^T s_t). \qquad (7) $$

Here, $g_k$ is the set of image fragments in image $k$ and $g_l$ is the set of sentence fragments in sentence $l$. The indices $k, l$ range over the images and sentences in the training set. Together with their additional Multiple Instance Learning objective, this score carries the interpretation that a sentence fragment aligns to a subset of the image regions whenever the dot product is positive. We found that the following reformulation simplifies the model and alleviates the need for additional objectives and their hyperparameters:

$$ S_{kl} = \sum_{t \in g_l} \max_{i \in g_k} v_i^T s_t. \qquad (8) $$

Here, every word $s_t$ aligns to the single best image region. As we show in the experiments, this simplified model also leads to improvements in the final ranking performance. Assuming that $k = l$ denotes a corresponding image and sentence pair, the final max-margin, structured loss remains:

$$ \mathcal{C}(\theta) = \sum_k \Big[ \underbrace{\sum_l \max(0, S_{kl} - S_{kk} + 1)}_{\text{rank images}} + \underbrace{\sum_l \max(0, S_{lk} - S_{kk} + 1)}_{\text{rank sentences}} \Big]. \qquad (9) $$

This objective encourages aligned image-sentence pairs to have a higher score than misaligned pairs, by a margin.

Figure 3. Diagram for evaluating the image-sentence score $S_{kl}$. Object regions are embedded with a CNN (left). Words (enriched by their context) are embedded in the same multimodal space with a BRNN (right). Pairwise similarities are computed with inner products (magnitudes shown in grayscale) and finally reduced to the image-sentence score with Equation 8.

**3.1.4 Decoding text segment alignments to images**

Consider an image from the training set and its corresponding sentence. We can interpret the quantity $v_i^T s_t$ as the unnormalized log probability of the $t$-th word describing any of the bounding boxes in the image. However, since we are ultimately interested in generating snippets of text instead of single words, we would like to align extended, contiguous sequences of words to a single bounding box. Note that the naive solution that assigns each word independently to the highest-scoring region is insufficient because it leads to words getting scattered inconsistently to different regions.

To address this issue, we treat the true alignments as latent variables in a Markov Random Field (MRF) where the binary interactions between neighboring words encourage an alignment to the same region. Concretely, given a sentence with $N$ words and an image with $M$ bounding boxes, we introduce the latent alignment variables $a_j \in \{1 \ldots M\}$ for $j = 1 \ldots N$ and formulate an MRF in a chain structure along the sentence as follows:

$$ E(\mathbf{a}) = \sum_{j=1 \ldots N} \psi_j^U(a_j) + \sum_{j=1 \ldots N-1} \psi_j^B(a_j, a_{j+1}) \qquad (10) $$

$$ \psi_j^U(a_j = t) = v_t^T s_j \qquad (11) $$

$$ \psi_j^B(a_j, a_{j+1}) = \beta \, \mathbb{1}[a_j = a_{j+1}]. \qquad (12) $$

Here, $\beta$ is a hyperparameter that controls the affinity towards longer word phrases. This parameter allows us to interpolate between single-word alignments ($\beta = 0$) and aligning the entire sentence to a single, maximally scoring region when $\beta$ is large. We minimize the energy to find the best alignments $\mathbf{a}$ using dynamic programming. The output of this process is a set of image regions annotated with segments of text. We now describe an approach for generating novel phrases based on these correspondences.

**3.2. Multimodal Recurrent Neural Network for generating descriptions**

In this section we assume an input set of images and their textual descriptions. These could be full images and their sentence descriptions, or regions and text snippets, as inferred in the previous section. The key challenge is in the design of a model that can predict a variable-sized sequence of outputs given an image. In previously developed language models based on Recurrent Neural Networks (RNNs) [40, 50, 10], this is achieved by defining a probability distribution of the next word in a sequence given the current word and context from previous time steps. We explore a simple but effective extension that additionally conditions the generative process on the content of an input image. More formally, during training our Multimodal RNN takes the image pixels $I$ and a sequence of input vectors $(x_1, \ldots, x_T)$. It then computes a sequence of hidden states $(h_1, \ldots, h_T)$ and a sequence of outputs $(y_1, \ldots, y_T)$ by iterating the following recurrence relation for $t = 1$ to $T$:

$$ b_v = W_{hi}[\mathrm{CNN}_{\theta_c}(I)] \qquad (13) $$

$$ h_t = f(W_{hx} x_t + W_{hh} h_{t-1} + b_h + \mathbb{1}(t = 1) \odot b_v) \qquad (14) $$

$$ y_t = \mathrm{softmax}(W_{oh} h_t + b_o). \qquad (15) $$

In the equations above, $W_{hi}, W_{hx}, W_{hh}, W_{oh}, x_i$ and $b_h, b_o$ are learnable parameters, and $\mathrm{CNN}_{\theta_c}(I)$ is the last layer of a CNN. The output vector $y_t$ holds the (unnormalized) log probabilities of words in the dictionary and one additional dimension for a special END token. Note that we provide the image context vector $b_v$ to the RNN only at the first iteration, which we found to work better than at each time step. In practice we also found that it can help to pass both $b_v$ and $(W_{hx} x_t)$ through the activation function. A typical size of the hidden layer of the RNN is 512 neurons.
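As an illustration of the recurrence in Equations (13)-(15), here is a minimal NumPy sketch of greedy decoding with the Multimodal RNN. Parameter names, shapes and the greedy loop are our assumptions for exposition, not the authors' code.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def generate(cnn_feature, params, word_embed, START, END, max_len=20):
    """Greedy decoding with the Multimodal RNN (Eqs. 13-15).

    cnn_feature : last-layer CNN activation of the image, CNN_{theta_c}(I)
    params      : dict with matrices Whi, Whx, Whh, Woh and biases bh, bo
    word_embed  : (V, d) matrix of input word vectors x_t
    START, END  : indices of the special START / END tokens
    """
    bv = params["Whi"] @ cnn_feature                     # Eq. 13: image bias
    h = np.zeros(params["Whh"].shape[0])
    x = word_embed[START]
    words, t = [], 1
    while t <= max_len:
        pre = params["Whx"] @ x + params["Whh"] @ h + params["bh"]
        if t == 1:                                       # image enters only at t = 1
            pre = pre + bv
        h = np.maximum(0.0, pre)                         # Eq. 14 with a ReLU nonlinearity
        y = softmax(params["Woh"] @ h + params["bo"])    # Eq. 15
        w = int(np.argmax(y))                            # or sample from y / beam search
        if w == END:
            break
        words.append(w)
        x = word_embed[w]
        t += 1
    return words
```

In the paper, sampling the argmax is one option; the experiments report that a small beam search (beam size around 7) improves over greedy decoding.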
**RNN training.** The RNN is trained to combine a word ($x_t$) and the previous context ($h_{t-1}$) to predict the next word ($y_t$). We condition the RNN's predictions on the image information ($b_v$) via bias interactions on the first step. The training proceeds as follows (refer to Figure 4): We set $h_0 = \vec{0}$, $x_1$ to a special START vector, and the desired label $y_1$ as the first word in the sequence. Analogously, we set $x_2$ to the word vector of the first word and expect the network to predict the second word, etc. Finally, on the last step when $x_T$ represents the last word, the target label is set to a special END token. The cost function is to maximize the log probability assigned to the target labels (i.e. a Softmax classifier).

**RNN at test time.** To predict a sentence, we compute the image representation $b_v$, set $h_0 = \vec{0}$, $x_1$ to the START vector and compute the distribution over the first word $y_1$. We sample a word from the distribution (or pick the argmax), set its embedding vector as $x_2$, and repeat this process until the END token is generated. In practice we found that beam search (e.g. beam size 7) can improve results.

**3.3. Optimization**

We use SGD with mini-batches of 100 image-sentence pairs and momentum of 0.9 to optimize the alignment model. We cross-validate the learning rate and the weight decay. We also use dropout regularization in all layers except in the recurrent layers [59] and clip gradients elementwise at 5 (important). The generative RNN is more difficult to optimize, partly due to the word frequency disparity between rare words and common words (e.g. "a" or the END token). We achieved the best results using RMSprop [52], which is an adaptive step size method that scales the update of each weight by a running average of its gradient norm.

Figure 4. Diagram of our multimodal Recurrent Neural Network generative model. The RNN takes a word, the context from previous time steps and defines a distribution over the next word in the sentence. The RNN is conditioned on the image information at the first time step. START and END are special tokens.

**4. Experiments**

**Datasets.** We use the Flickr8K [21], Flickr30K [58] and MSCOCO [37] datasets in our experiments. These datasets contain 8,000, 31,000 and 123,000 images respectively and each is annotated with 5 sentences using Amazon Mechanical Turk. For Flickr8K and Flickr30K, we use 1,000 images for validation, 1,000 for testing and the rest for training (consistent with [21, 24]). For MSCOCO we use 5,000 images for both validation and testing.

**Data Preprocessing.** We convert all sentences to lowercase and discard non-alphanumeric characters. We filter words to those that occur at least 5 times in the training set, which results in 2538, 7414, and 8791 words for the Flickr8K, Flickr30K, and MSCOCO datasets respectively.

**4.1. Image-Sentence Alignment Evaluation**

We first investigate the quality of the inferred text and image alignments with ranking experiments. We consider a withheld set of images and sentences and retrieve items in one modality given a query from the other by sorting based on the image-sentence score $S_{kl}$ (Section 3.1.3). We report the median rank of the closest ground truth result in the list and Recall@K, which measures the fraction of times a correct item was found among the top K results. The result of these experiments can be found in Table 1, and example retrievals in Figure 5.
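The ranking protocol described above can be summarized in a few lines. This sketch of Recall@K and median rank assumes the scores are stored in a square matrix whose diagonal holds the ground-truth pairs; it is an illustration, not the evaluation script used in the paper.

```python
import numpy as np

def ranking_metrics(S, ks=(1, 5, 10)):
    """S[k, l] = image-sentence score between image k and sentence l.

    Assumes sentence k is the ground-truth description of image k and
    evaluates sentence retrieval given an image query.
    """
    n = S.shape[0]
    ranks = []
    for k in range(n):
        order = np.argsort(-S[k])                      # best-scoring sentences first
        ranks.append(int(np.where(order == k)[0][0]) + 1)
    ranks = np.array(ranks)
    recalls = {f"R@{K}": float(np.mean(ranks <= K)) for K in ks}
    return recalls, float(np.median(ranks))
```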
**Flickr30K** (left four columns: Image Annotation; right four columns: Image Search)

| Model | R@1 | R@5 | R@10 | Med r | R@1 | R@5 | R@10 | Med r |
|---|---|---|---|---|---|---|---|---|
| SDT-RNN (Socher et al. [49]) | 9.6 | 29.8 | 41.1 | 16 | 8.9 | 29.8 | 41.1 | 16 |
| Kiros et al. [25] | 14.8 | 39.2 | 50.9 | 10 | 11.8 | 34.0 | 46.3 | 13 |
| Mao et al. [38] | 18.4 | 40.2 | 50.9 | 10 | 12.6 | 31.2 | 41.5 | 16 |
| Donahue et al. [8] | 17.5 | 40.3 | 50.8 | 9 | - | - | - | - |
| DeFrag (Karpathy et al. [24]) | 14.2 | 37.7 | 51.3 | 10 | 10.2 | 30.8 | 44.2 | 14 |
| Our implementation of DeFrag [24] | 19.2 | 44.5 | 58.0 | 6.0 | 12.9 | 35.4 | 47.5 | 10.8 |
| Our model: DepTree edges | 20.0 | 46.6 | 59.4 | 5.4 | 15.0 | 36.5 | 48.2 | 10.4 |
| Our model: BRNN | 22.2 | 48.2 | 61.4 | 4.8 | 15.2 | 37.7 | 50.5 | 9.2 |
| Vinyals et al. [54] (more powerful CNN) | 23 | - | 63 | 5 | 17 | - | 57 | 8 |

**MSCOCO**

| Model | R@1 | R@5 | R@10 | Med r | R@1 | R@5 | R@10 | Med r |
|---|---|---|---|---|---|---|---|---|
| Our model: 1K test images | 38.4 | 69.9 | 80.5 | 1.0 | 27.4 | 60.2 | 74.8 | 3.0 |
| Our model: 5K test images | 16.5 | 39.2 | 52.0 | 9.0 | 10.7 | 29.6 | 42.2 | 14.0 |

Table 1. Image-Sentence ranking experiment results. **R@K** is Recall@K (high is good). **Med** r is the median rank (low is good). In the results for our models, we take the top 5 validation set models, evaluate each independently on the test set and then report the average performance. The standard deviations on the recall values range from approximately 0.5 to 1.0.

Figure 5. Example alignments predicted by our model. For every test image above, we retrieve the most compatible test sentence and visualize the highest-scoring region for each word (before MRF smoothing described in Section 3.1.4) and the associated scores ($v_i^T s_t$). We hide the alignments of low-scoring words to reduce clutter. We assign each region an arbitrary color.

We now highlight some of the takeaways.

**Our full model outperforms previous work.** First, our full model ("Our model: BRNN") outperforms Socher et al. [49] who trained with a similar loss but used a single image representation and a Recursive Neural Network over the sentence. A similar loss was adopted by Kiros et al. [25], who use an LSTM [20] to encode sentences. We list their performance with a CNN that is equivalent in power (AlexNet [28]) to the one used in this work, though similar to [54] they outperform our model with a more powerful CNN (VGGNet [47], GoogLeNet [51]). "DeFrag" are the results reported by Karpathy et al. [24]. Since we use different word vectors, dropout for regularization and different cross-validation ranges and larger embedding sizes, we reimplemented their loss for a fair comparison ("Our implementation of DeFrag"). Compared to other work that uses AlexNets, our full model shows consistent improvement.

**Our simpler cost function improves performance.** We strive to better understand the source of our performance. First, we removed the BRNN and used dependency tree relations exactly as described in Karpathy et al. [24] ("Our model: DepTree edges"). The only difference between this model and "Our reimplementation of DeFrag" is the new, simpler cost function introduced in Section 3.1.3. We see that our formulation shows consistent improvements.

**BRNN outperforms dependency tree relations.** Furthermore, when we replace the dependency tree relations with the BRNN we observe additional performance improvements. Since the dependency relations were shown to work better than single words and bigrams [24], this suggests that the BRNN is taking advantage of contexts longer than two words. Furthermore, our method does not rely on extracting a Dependency Tree and instead uses the raw words directly.

**MSCOCO results for future comparisons.** We are not aware of other published ranking results on MSCOCO. Therefore, we report results on a subset of 1,000 images and the full set of 5,000 test images for future comparisons. Note that the 5,000-image numbers are lower since Recall@K is a function of test set size.

**Qualitative.** As can be seen from the example groundings in Figure 5, the model discovers interpretable visual-semantic correspondences, even for small or relatively rare objects such as an _"accordion"_. These would likely be missed by models that only reason about full images.

**Learned region and word vector magnitudes.** An appealing feature of our model is that it learns to modulate the magnitude of the region and word embeddings. Due to their inner product interaction, we observe that representations of visually discriminative words such as _"kayaking, pumpkins"_ have embedding vectors with higher magnitudes, which in turn translates to a higher influence on the image-sentence score. Conversely, stop words such as _"now, simply, actually, but"_ are mapped near the origin, which reduces their influence. See more analysis in the supplementary material.
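A sketch of the magnitude analysis referred to above (and expanded in the supplementary material): average the norm of each word's BRNN vector over a corpus and sort. The helper names and the choice to average over occurrences are illustrative assumptions.

```python
from collections import defaultdict
import numpy as np

def word_magnitudes(sentences, encode):
    """sentences: list of token lists; encode: tokens -> list of s_t vectors (BRNN)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for tokens in sentences:
        for tok, s_t in zip(tokens, encode(tokens)):
            sums[tok] += float(np.linalg.norm(s_t))
            counts[tok] += 1
    avg = {w: sums[w] / counts[w] for w in sums}
    ranked = sorted(avg.items(), key=lambda kv: kv[1])
    return ranked[:40], ranked[-40:]    # lowest (stop-word-like) and highest magnitudes
```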
| Model | Flickr8K: B-1 / B-2 / B-3 / B-4 | Flickr30K: B-1 / B-2 / B-3 / B-4 | MSCOCO 2014: B-1 / B-2 / B-3 / B-4 / METEOR / CIDEr |
|---|---|---|---|
| Nearest Neighbor | — / — / — / — | — / — / — / — | 48.0 / 28.1 / 16.6 / 10.0 / 15.7 / 38.3 |
| Mao et al. [38] | 58 / 28 / 23 / — | 55 / 24 / 20 / — | — / — / — / — / — / — |
| Google NIC [54] | 63 / 41 / 27 / — | 66.3 / 42.3 / 27.7 / 18.3 | 66.6 / 46.1 / 32.9 / 24.6 / — / — |
| LRCN [8] | — / — / — / — | 58.8 / 39.1 / 25.1 / 16.5 | 62.8 / 44.2 / 30.4 / — / — / — |
| MS Research [12] | — / — / — / — | — / — / — / — | — / — / — / 21.1 / 20.7 / — |
| Chen and Zitnick [5] | — / — / — / 14.1 | — / — / — / 12.6 | — / — / — / 19.0 / 20.4 / — |
| Our model | 57.9 / 38.3 / 24.5 / 16.0 | 57.3 / 36.9 / 24.0 / 15.7 | 62.5 / 45.0 / 32.1 / 23.0 / 19.5 / 66.0 |

Table 2. Evaluation of full image predictions on 1,000 test images. **B-n** is BLEU score that uses up to n-grams. High is good in all columns. For future comparisons, our METEOR/CIDEr Flickr8K scores are 16.7/31.8 and the Flickr30K scores are 15.3/24.7.

Figure 6. Example sentences generated by the multimodal RNN for test images. We provide many more examples on our project page.

**4.2. Generated Descriptions: Fullframe evaluation**

We now evaluate the ability of our RNN model to describe images and regions. We first trained our Multimodal RNN to generate sentences on full images with the goal of verifying that the model is rich enough to support the mapping from image data to sequences of words. For these full image experiments we use the more powerful VGGNet image features [47]. We report the BLEU [44], METEOR [7] and CIDEr [53] scores computed with the coco-caption code [4]. [2] Each method evaluates a _candidate_ sentence by measuring how well it matches a set of five _reference_ sentences written by humans.

**Qualitative.** The model generates sensible descriptions of images (see Figure 6), although we consider the last two images failure cases. The first prediction _"man in black shirt is playing a guitar"_ does not appear in the training set. However, there are 20 occurrences of "man in black shirt" and 60 occurrences of "is playing guitar", which the model may have composed to describe the first image. In general, we find that a relatively large portion of generated sentences (60% with beam size 7) can be found in the training data. This fraction decreases with lower beam sizes; for instance, with beam size 1 this falls to 25%, but the performance also deteriorates (e.g. from 0.66 to 0.61 CIDEr).

**Multimodal RNN outperforms retrieval baseline.** Our first comparison is to a nearest neighbor retrieval baseline. Here, we annotate each test image with a sentence of the most similar training set image as determined by the L2 norm over VGGNet [47] fc7 features. Table 2 shows that the Multimodal RNN confidently outperforms this retrieval method. Hence, even with 113,000 train set images in MSCOCO the retrieval approach is inadequate. Additionally, the RNN takes only a fraction of a second to evaluate per image.

[2] https://github.com/tylin/coco-caption

**Comparison to other work.** Several related models have been proposed in Arxiv preprints since the original submission of this work. We also include these in Table 2 for comparison. Most similar to our model is Vinyals et al. [54]. Unlike this work, where the image information is communicated through a bias term on the first step, they incorporate it as a first word, use a more powerful but more complex sequence learner (LSTM [20]) and a different CNN (GoogLeNet [51]), and report results of a model ensemble. Donahue et al. [8] use a 2-layer factored LSTM (similar in structure to the RNN in Mao et al. [38]). Both models appear to work worse than ours, but this is likely in large part due to their use of the less powerful AlexNet [28] features. Compared to these approaches, our model prioritizes simplicity and speed at a slight cost in performance.

**4.3. Generated Descriptions: Region evaluation**

We now train the Multimodal RNN on the correspondences between image regions and snippets of text, as inferred by the alignment model. To support the evaluation, we used Amazon Mechanical Turk (AMT) to collect a new dataset of region-level annotations.
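For reference, the nearest-neighbor baseline reported in Table 2 can be sketched as follows, assuming precomputed fc7 features; this is our illustration rather than the authors' code.

```python
import numpy as np

def nearest_neighbor_caption(test_feat, train_feats, train_captions):
    """Annotate a test image with a caption of its closest training image.

    test_feat      : (4096,) VGGNet fc7 feature of the test image
    train_feats    : (N, 4096) fc7 features of the training images
    train_captions : list of N caption strings, one per training image
    """
    d2 = np.sum((train_feats - test_feat) ** 2, axis=1)   # squared L2 distance
    return train_captions[int(np.argmin(d2))]
```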
Figure 7. Example region predictions. We use our region-level multimodal RNN to generate text (shown on the right of each image) for some of the bounding boxes in each image. The lines are grounded to centers of bounding boxes and the colors are chosen arbitrarily.

We use these annotations only at test time. The labeling interface displayed a single image and asked annotators (we used nine per image) to draw five bounding boxes and annotate each with text. In total, we collected 9,000 text snippets for 200 images in our MSCOCO test split (i.e. 45 snippets per image). The snippets have an average length of 2.3 words. Example annotations include _"sports car", "elderly couple sitting", "construction site", "three dogs on leashes", "chocolate cake"_. We noticed that asking annotators for grounded text snippets induces language statistics different from those in full image captions. Our region annotations are more comprehensive and feature elements of scenes that would rarely be considered salient enough to be included in a single sentence about the full image, such as _"heating vent", "belt buckle", and "chimney"_.

**Qualitative.** We show example region model predictions in Figure 7. To reiterate the difficulty of the task, consider for example the phrase _"table with wine glasses"_ that is generated on the image on the right in Figure 7. This phrase only occurs in the training set 30 times. Each time it may have a different appearance and each time it may occupy a few (or none) of our object bounding boxes. To generate this string for the region, the model had to first correctly learn to ground the string and then also learn to generate it.

**Region model outperforms full frame model and ranking baseline.** Similar to the full image description task, we evaluate this data as a prediction task from a 2D array of pixels (one image region) to a sequence of words and record the BLEU score. The ranking baseline retrieves training sentence substrings most compatible with each region as judged by the BRNN model. Table 3 shows that the region RNN model produces descriptions most consistent with our collected data. Note that the fullframe model was trained only on full images, so feeding it smaller image regions deteriorates its performance. However, its sentences are also longer than the region model sentences, which likely negatively impacts the BLEU score. The sentence length is non-trivial to control for with an RNN, but we note that the region model also outperforms the fullframe model on all other metrics: CIDEr 61.6/20.3, METEOR 15.8/13.3, ROUGE 35.1/21.0 for region/fullframe respectively.

| Model | B-1 | B-2 | B-3 | B-4 |
|---|---|---|---|---|
| Human agreement | 61.5 | 45.2 | 30.1 | 22.0 |
| Nearest Neighbor | 22.9 | 10.5 | 0.0 | 0.0 |
| RNN: Fullframe model | 14.2 | 6.0 | 2.2 | 0.0 |
| RNN: Region level model | **35.2** | **23.0** | **16.1** | **14.8** |

Table 3. BLEU score evaluation of image region annotations.

**4.4. Limitations**

Although our results are encouraging, the Multimodal RNN model is subject to multiple limitations. First, the model can only generate a description of one input array of pixels at a fixed resolution. A more sensible approach might be to use multiple saccades around the image to identify all entities, their mutual interactions and wider context before generating a description. Additionally, the RNN receives the image information only through additive bias interactions, which are known to be less expressive than more complicated multiplicative interactions [50, 20]. Lastly, our approach consists of two separate models. Going directly from an image-sentence dataset to region-level annotations as part of a single model trained end-to-end remains an open problem.

**5. Conclusions**

We introduced a model that generates natural language descriptions of image regions based on weak labels in the form of a dataset of images and sentences, and with very few hard-coded assumptions. Our approach features a novel ranking model that aligned parts of visual and language modalities through a common, multimodal embedding. We showed that this model provides state of the art performance on image-sentence ranking experiments. Second, we described a Multimodal Recurrent Neural Network architecture that generates descriptions of visual data. We evaluated its performance on both fullframe and region-level experiments and showed that in both cases the Multimodal RNN outperforms retrieval baselines.

**Acknowledgements.** We thank Justin Johnson and Jon Krause for helpful comments and discussions. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this research. This research is partially supported by an ONR MURI grant, and NSF ISS-1115313.

**References**

[1] A. Barbu, A. Bridge, Z. Burchill, D. Coroian, S. Dickinson, S. Fidler, A. Michaux, S. Mussman, S. Narayanaswamy, D. Salvi, et al. Video in sentences out. _arXiv preprint arXiv:1204.2742_, 2012.

[2] K. Barnard, P. Duygulu, D. Forsyth, N. De Freitas, D. M. Blei, and M. I. Jordan. Matching words and pictures. _JMLR_, 2003.

[3] Y. Bengio, H. Schwenk, J.-S. Senécal, F. Morin, and J.-L. Gauvain. Neural probabilistic language models. In _Innovations in Machine Learning_. Springer, 2006.

[4] X. Chen, H. Fang, T.-Y. Lin, R. Vedantam, S. Gupta, P. Dollar, and C. L. Zitnick. Microsoft coco captions: Data collection and evaluation server. _arXiv preprint arXiv:1504.00325_, 2015.

[5] X. Chen and C. L. Zitnick. Learning a recurrent visual representation for image caption generation. _CoRR_, abs/1411.5654, 2014.

[6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In _CVPR_, 2009.

[7] M. Denkowski and A. Lavie. Meteor universal: Language specific translation evaluation for any target language. In _Proceedings of the EACL 2014 Workshop on Statistical Machine Translation_, 2014.

[8] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. _arXiv preprint arXiv:1411.4389_, 2014.

[9] D. Elliott and F. Keller. Image description using visual dependency representations. In _EMNLP_, pages 1292–1302, 2013.

[10] J. L. Elman. Finding structure in time.
_Cognitive science_, +14(2):179–211, 1990. + +[11] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and +A. Zisserman. The pascal visual object classes (voc) challenge. _International Journal of Computer Vision_, 88(2):303– +338, June 2010. + +[12] H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, +P. Doll´ar, J. Gao, X. He, M. Mitchell, J. Platt, et al. +From captions to visual concepts and back. _arXiv preprint_ +_arXiv:1411.4952_, 2014. + +[13] A. Farhadi, M. Hejrati, M. A. Sadeghi, P. Young, +C. Rashtchian, J. Hockenmaier, and D. Forsyth. Every picture tells a story: Generating sentences from images. In +_ECCV_ . 2010. + +[14] L. Fei-Fei, A. Iyer, C. Koch, and P. Perona. What do we +perceive in a glance of a real-world scene? _Journal of vision_, +7(1):10, 2007. + +[15] S. Fidler, A. Sharma, and R. Urtasun. A sentence is worth a +thousand pixels. In _CVPR_, 2013. + +[16] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, +T. Mikolov, et al. Devise: A deep visual-semantic embedding model. In _NIPS_, 2013. + +[17] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic +segmentation. In _CVPR_, 2014. + + + + +[18] S. Gould, R. Fulton, and D. Koller. Decomposing a scene +into geometric and semantically consistent regions. In _Com-_ +_puter Vision, 2009 IEEE 12th International Conference on_, +pages 1–8. IEEE, 2009. + +[19] A. Gupta and P. Mannem. From image annotation to image description. In _Neural information processing_ . Springer, +2012. + +[20] S. Hochreiter and J. Schmidhuber. Long short-term memory. +_Neural computation_, 9(8):1735–1780, 1997. + +[21] M. Hodosh, P. Young, and J. Hockenmaier. Framing image +description as a ranking task: data, models and evaluation +metrics. _Journal of Artificial Intelligence Research_, 2013. + +[22] R. JeffreyPennington and C. Manning. Glove: Global vectors for word representation. 2014. + +[23] Y. Jia, M. Salzmann, and T. Darrell. Learning cross-modality +similarity for multinomial data. In _ICCV_, 2011. + +[24] A. Karpathy, A. Joulin, and L. Fei-Fei. Deep fragment embeddings for bidirectional image sentence mapping. _arXiv_ +_preprint arXiv:1406.5679_, 2014. + +[25] R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying +visual-semantic embeddings with multimodal neural language models. _arXiv preprint arXiv:1411.2539_, 2014. + +[26] R. Kiros, R. S. Zemel, and R. Salakhutdinov. Multimodal +neural language models. _ICML_, 2014. + +[27] C. Kong, D. Lin, M. Bansal, R. Urtasun, and S. Fidler. What +are you talking about? text-to-image coreference. In _CVPR_, +2014. + +[28] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet +classification with deep convolutional neural networks. In +_NIPS_, 2012. + +[29] G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A. C. Berg, +and T. L. Berg. Baby talk: Understanding and generating +simple image descriptions. In _CVPR_, 2011. + +[30] P. Kuznetsova, V. Ordonez, A. C. Berg, T. L. Berg, and +Y. Choi. Collective generation of natural image descriptions. +In _ACL_, 2012. + +[31] P. Kuznetsova, V. Ordonez, T. L. Berg, U. C. Hill, and +Y. Choi. Treetalk: Composition and compression of trees +for image descriptions. _Transactions of the Association for_ +_Computational Linguistics_, 2(10):351–362, 2014. + +[32] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradientbased learning applied to document recognition. _Proceed-_ +_ings of the IEEE_, 86(11):2278–2324, 1998. + +[33] L.-J. Li and L. Fei-Fei. What, where and who? 
classifying +events by scene and object recognition. In _ICCV_, 2007. + +[34] L.-J. Li, R. Socher, and L. Fei-Fei. Towards total scene understanding: Classification, annotation and segmentation in +an automatic framework. In _Computer Vision and Pattern_ +_Recognition, 2009. CVPR 2009. IEEE Conference on_, pages +2036–2043. IEEE, 2009. + +[35] S. Li, G. Kulkarni, T. L. Berg, A. C. Berg, and Y. Choi. Composing simple image descriptions using web-scale n-grams. +In _CoNLL_, 2011. + +[36] D. Lin, S. Fidler, C. Kong, and R. Urtasun. Visual semantic +search: Retrieving videos via complex textual queries. 2014. + + +[37] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Doll´ar, and C. L. Zitnick. Microsoft coco: Common objects in context. _arXiv preprint arXiv:1405.0312_, +2014. + +[38] J. Mao, W. Xu, Y. Yang, J. Wang, and A. L. Yuille. Explain +images with multimodal recurrent neural networks. _arXiv_ +_preprint arXiv:1410.1090_, 2014. + +[39] C. Matuszek*, N. FitzGerald*, L. Zettlemoyer, L. Bo, and +D. Fox. A Joint Model of Language and Perception for +Grounded Attribute Learning. In _Proc. of the 2012 Interna-_ +_tional Conference on Machine Learning_, Edinburgh, Scotland, June 2012. + +[40] T. Mikolov, M. Karafi´at, L. Burget, J. Cernock`y, and S. Khudanpur. Recurrent neural network based language model. In +_INTERSPEECH_, 2010. + +[41] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and +J. Dean. Distributed representations of words and phrases +and their compositionality. In _NIPS_, 2013. + +[42] M. Mitchell, X. Han, J. Dodge, A. Mensch, A. Goyal, +A. Berg, K. Yamaguchi, T. Berg, K. Stratos, and H. Daum´e, +III. Midge: Generating image descriptions from computer +vision detections. In _EACL_, 2012. + +[43] V. Ordonez, G. Kulkarni, and T. L. Berg. Im2text: Describing images using 1 million captioned photographs. In _NIPS_, +2011. + +[44] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: a +method for automatic evaluation of machine translation. In +_Proceedings of the 40th annual meeting on association for_ +_computational linguistics_, pages 311–318. Association for +Computational Linguistics, 2002. + +[45] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, +S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, +A. C. Berg, and L. Fei-Fei. Imagenet large scale visual recognition challenge, 2014. + +[46] M. Schuster and K. K. Paliwal. Bidirectional recurrent neural +networks. _Signal Processing, IEEE Transactions on_, 1997. + +[47] K. Simonyan and A. Zisserman. Very deep convolutional +networks for large-scale image recognition. _arXiv preprint_ +_arXiv:1409.1556_, 2014. + +[48] R. Socher and L. Fei-Fei. Connecting modalities: Semisupervised segmentation and annotation of images using unaligned text corpora. In _CVPR_, 2010. + +[49] R. Socher, A. Karpathy, Q. V. Le, C. D. Manning, and A. Y. +Ng. Grounded compositional semantics for finding and describing images with sentences. _TACL_, 2014. + +[50] I. Sutskever, J. Martens, and G. E. Hinton. Generating text +with recurrent neural networks. In _ICML_, 2011. + +[51] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, +D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. _arXiv preprint_ +_arXiv:1409.4842_, 2014. + +[52] T. Tieleman and G. E. Hinton. Lecture 6.5-rmsprop: Divide +the gradient by a running average of its recent magnitude., +2012. + +[53] R. Vedantam, C. L. Zitnick, and D. Parikh. Cider: +Consensus-based image description evaluation. _CoRR_, +abs/1411.5726, 2014. 
+ + + + +[54] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show +and tell: A neural image caption generator. _arXiv preprint_ +_arXiv:1411.4555_, 2014. + +[55] Y. Yang, C. L. Teo, H. Daum´e III, and Y. Aloimonos. +Corpus-guided sentence generation of natural images. In +_EMNLP_, 2011. + +[56] B. Z. Yao, X. Yang, L. Lin, M. W. Lee, and S.-C. Zhu. I2t: +Image parsing to text description. _Proceedings of the IEEE_, +98(8):1485–1508, 2010. + +[57] M. Yatskar, L. Vanderwende, and L. Zettlemoyer. See no +evil, say no evil: Description generation from densely labeled images. _Lexical and Computational Semantics_, 2014. + + +[58] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. _TACL_, +2014. + +[59] W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. _arXiv preprint arXiv:1409.2329_, +2014. + +[60] C. L. Zitnick, D. Parikh, and L. Vanderwende. Learning the +visual interpretation of sentences. _ICCV_, 2013. + + +**6. Supplementary Material** + + +**6.1. Magnitude modulation** + + +An appealing feature of our alignment model is that it learns +to modulate the importance of words and regions by scaling +the magnitude of their corresponding embedding vectors. +To see this, recall that we compute the image-sentence similarity between image _k_ and sentence _l_ as follows: + + +_Skl_ = - _maxi∈gk_ _vi_ _[T]_ _[s][t][.]_ (16) + +_t∈gl_ + + +**Disciminative words.** As a result of this formulation, +we observe that representations of visually discriminative +words such as _“kayaking, pumpkins“_ tend to have higher +magnitude in the embedding space, which translates to a +higher influence on the final image-sentence scores due to +the inner product. Conversely, the model learns to map stop +words such as _“now, simply, actually, but”_ near the origin, which reduces their influence. Table 4 show the top +40 words with highest and lowest magnitudes _∥st∥_ . + + +**Disciminative regions.** Similarly, image regions that contain discriminative entities are assigned vectors of higher +magnitudes by our model. This can be be interpreted as a +measure of visual saliency, since these regions would produced large scores if their textual description was present in +a corresponding sentence. We show the regions with high +magnitudes in Figure 8. Notice the common occurrence of +often described regions such as balls, bikes, helmets. + + +Figure 8. Flickr30K test set regions with high vector magnitude. + + + +|Magnitude|Word|Magnitude|Word| +|---|---|---|---| +|0.42
0.42
0.43
0.44
0.44
0.45
0.45
0.46
0.47
0.47
0.47
0.47
0.47
0.47
0.48
0.48
0.48
0.48
0.48
0.48
0.48
0.49
0.49
0.50
0.50
0.50
0.50
0.50
0.50
0.51
0.51
0.51
0.51
0.51
0.51
0.51
0.51
0.51
0.52
0.52|now
simply
actually
but
neither
then
still
obviously
that
which
felt
not
might
because
appeared
therefore
been
if
also
only
so
would
yet
be
had
revealed
never
very
without
they
either
could
feel
otherwise
when
already
being
else
just
ones|2.61
2.59
2.59
2.58
2.56
2.54
2.54
2.54
2.52
2.52
2.51
2.51
2.50
2.50
2.50
2.48
2.48
2.48
2.47
2.47
2.46
2.46
2.46
2.46
2.46
2.46
2.46
2.46
2.45
2.43
2.43
2.43
2.42
2.42
2.42
2.42
2.41
2.41
2.40
2.40|kayaking
trampoline
pumpkins
windsurfing
wakeboard
acrobatics
sousaphone
skydivers
wakeboarders
skateboard
snowboarder
wakeboarder
skydiving
guitar
snowboard
kitchen
paraglider
ollie
firetruck
gymnastics
waterfalls
motorboat
fryer
skateboarding
dulcimer
waterfall
backflips
unicyclist
kayak
costumes
wakeboarding
trike
dancers
cupcakes
tuba
skijoring
firewood
elevators
cranes
bassoon| + + +Table 4. This table shows the top magnitudes of vectors ( _∥st∥_ ) for +words in Flickr30K. Since the magnitude of individual words in +our model is also a function of their surrounding context in the +sentence, we report the average magnitude. + + +**6.2. Alignment model** + + +**Learned appearance of text snippets** . We can query our +alignment model with a piece of text and retrieve individual +image regions that have the highest score with that snippet. We show examples of such queries in Figure 9 and +Figure 10. Notice that the model is sensitive to compound +words and modifiers. For example, _“red bus”_ and _“yel-_ +_low bus”_ give very different results. Similarly, _“bird flying_ +_in the sky”_ and _“bird on a tree branch”_ give different results. Additionally, it can be seen that the quality of the +results deteriorates for less frequently occurring concepts, +such as _“roof”_ or _“straw hat”_ . However, we emphasize that +the model learned these visual appearances of text snippets +from raw data of full images and sentences, without any explicit correspondences. + + +**Additional alignment visualizations** . See additional examples of inferred alignments between image regions and +words in Figure 11. Note that one limitation of our model is +that it does not explicitly handle or support counting. For instance, the last example we show contains the phrase _“three_ +_people”_ . These words should align to the three people in +the image, but our model puts the bounding box around two +of the people. In doing so, the model may be taking advantage of the BRNN structure to modify the “people” vector +to preferentially align to regions that contain multiple people. However, this is still unsatisfying because such spurious detections only exist as a result of an error in the RCNN +inference process, which presumably failed to localize the +individual people. + + +**Web demo** . We have published a web demo that displays +our alignments for all images in the test set [3] . + + +**Additional Flickr8K experiments** . We omitted ranking +experiment results from our paper due to space constraints, +but these can be found in Table 5 + + +**Counting** . We experimented with losses that perform probabilistic inference in the forward pass that explicitly tried +to localize exactly three distinct people in the image. However, this worked poorly because while the RCNN is good +at finding people, it is not very good at localizing them. For +instance, a single person can easily yield multiple detections +(the head, the torso, or the full body, for example). We were +not able to come up with a simple approach to collapsing +these into a single detection (non-maxim suppression by itself was not sufficient in our experiments). Note that this +ambiguity is partly an artifact of the training data. For example, torsos of people can often be labeled alone if the +body is occluded. We are therefore lead to believe that this +additional modeling step is highly non-trivial and a worthy +subject of future work. + + +3 http://cs.stanford.edu/people/karpathy/deepimagesent/rankingdemo/ + + + +**Plug and play use of Natural Language Processing** +**toolkits.** Before adopting the BRNN approach, we also +tried to use Natural Language Processing toolkits to process +the input sentences into graphs of noun phrases and their binary relations. 
For instance, in the sentence _“a brown dog is_ +_chasing a young child”_, the toolkit would infer that there are +two noun phrases ( _“a brown dog”, “young child”_ ), joined +by a binary relationship of _“chasing”_ . We then developed +a CRF that inferred the grounding of these noun phrases to +the detection bounding boxes in the image with a unary appearance model and a spatial binary model. However, this +endeavor proved fruitless. First, performing CRF-like inference during the forward pass of a Neural Network proved +to be extremely slow. Second, we found that there is surprisingly little information in the relative spatial positions +between bounding boxes. For instance, almost any two +bounding boxes in the image could correspond to the action of _“chasing”_ due to huge amount of possibly camera +views of a scene. Hence, we were unable to extract enough +signal from the binary relations in the coordinate system +of the image and suspect that more complex 3-dimensional +reasoning may be required. Lastly, we found that NLP tools +(when used out of the box) introduce a large amount of mistakes in the extracted parse trees, dependency trees and parts +of speech tags. We tried to fix these with complex rules and +exceptions, but ultimately decided to abandon the idea. We +believe that part of the problem is that these tools are usually +trained on different text corpora (e.g. news articles), so image captions are outside of their domain of competence. In +our experience, adopting the BRNN model instead of this +approach provided immediate performance improvements +and produced significant reductions in code complexity. + + +**6.3. Additional examples: Image annotation** + + +Additional examples of generated captions on the full image level can be found in Figure 12 (and our website). The +model often gets the right gist of the scene, but sometimes +guesses specific fine-grained words incorrectly. We expect +that reasoning not only the global level of the image but also +on the level of objects will significantly improve these results. We find the last example ( _“woman in bikini is jumping_ +_over hurdle”_ ) to be especially illuminating. This sentence +does not occur in the training data. Our general qualitative +impression of the model is that it learns certain templates, +e.g. _“in is in ”_, and then +fills these in based on textures in the image. In this particular case, the volleyball net has the visual appearance of a +hurdle, which may have caused the model to insert it as a +noun (along with the woman) into one of its learned sentence templates. + + +**6.4. Additional examples: Region annotation** + + +Additional examples of region annotations can be found +in Figure 13. Note that we annotate regions based on the +content of each image region alone, which can cause erroneous predictions when not enough context is available in +the bounding box (e.g. a generated description that says +“container” detected on the back of a dog’s head in the image on the right, in the second row). We found that one effective way of using the contextual information and improving the predictions is to concatenate the fullframe feature +CNN vector to the vector of the region of interest, giving +8192-dimensional input vector the to RNN. However, we +chose to omit these experiments in our paper to preserve the +simplicity of the mode, and because we believe that cleaner +and more principled approaches to this challenge can be developed. + + +**6.5. 
**6.5. Training the Multimodal RNN**


There are a few tricks needed to get the Multimodal RNN to train efficiently. We found that **clipping the gradients** (we only experimented with simple per-element clipping) at an appropriate value consistently gave better results and helped on the validation data. As mentioned in our paper, we experimented with SGD, SGD+Momentum, Adadelta, and Adagrad, but found **RMSProp** to give the best results; however, some SGD checkpoints also converged to nearby validation performance. Moreover, the distribution of words in the English language is highly non-uniform, so the model spends the first few iterations mostly learning the biases of the Softmax classifier until it predicts every word at random with the appropriate dataset frequency. We found that we could obtain faster convergence early in training (and nicer loss curves) by explicitly **initializing the biases** of all words in the dictionary (in the Softmax classifier) to the log probability of their occurrence in the training data. With small weights and the biases set this way, the model immediately predicts words at random according to their empirical distribution. After submission of our original paper we performed additional experiments comparing an RNN to an LSTM and found that **LSTMs** consistently produced better results, but took longer to train. Lastly, we initially used word2vec vectors as our word representations _xi_, but found that it was sufficient to train these vectors from random initialization without changes in the final performance. Moreover, we found that the word2vec vectors have some unappealing properties when used in multimodal language-visual tasks. For example, all colors (e.g. red, blue, green) are clustered nearby in the word2vec representation because they are relatively interchangeable in most language contexts. However, their visual instantiations are very different.


Figure 9. Examples of highest scoring regions for queried snippets of text (“glass of wine”, “yellow bus”, “closeup of zebra”, “sprinkled donut”, “shiny laptop”), on 5,000 images of our MSCOCO test set.


Figure 10. Examples of highest scoring regions for queried snippets of text (“bird flying in the sky”, “bird sitting on roof”, “closeup of fruit”, “man riding a horse”), on 5,000 images of our MSCOCO test set.


|**Model** (Flickr8K)|Image Annotation R@1|R@5|R@10|Med _r_|Image Search R@1|R@5|R@10|Med _r_|
|---|---|---|---|---|---|---|---|---|
|DeViSE (Frome et al. [16])|4.5|18.1|29.2|26|6.7|21.9|32.7|25|
|SDT-RNN (Socher et al. [49])|9.6|29.8|41.1|16|8.9|29.8|41.1|16|
|Kiros et al. [25]|13.5|36.2|45.7|13|10.4|31.0|43.7|14|
|Mao et al. [38]|14.5|37.2|48.5|11|11.5|31.0|42.4|15|
|DeFrag (Karpathy et al. [24])|12.6|32.9|44.0|14|9.7|29.6|42.5|15|
|Our implementation of DeFrag [24]|13.8|35.8|48.2|10.4|9.5|28.2|40.3|15.6|
|Our model: DepTree edges|14.8|37.9|50.0|9.4|11.6|31.4|43.8|13.2|
|Our model: BRNN|**16.5**|**40.6**|**54.2**|**7.6**|**11.8**|**32.1**|**44.7**|**12.4**|


Table 5. Ranking experiment results for the Flickr8K dataset.


Figure 11. Additional examples of alignments. For each query test image above we retrieve the most compatible sentence from the test set and show the alignments.


Figure 12. Additional examples of captions on the level of full images. Green: Human ground truth. Red: Top-scoring sentence from training set. Blue: Generated sentence.


Figure 13. Additional examples of region captions on the test set of Flickr30K.
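A minimal sketch of the two optimization tricks described in Section 6.5 (per-element gradient clipping, and initializing the Softmax biases to the log probabilities of word occurrence), written in PyTorch with made-up vocabulary counts and layer sizes; it illustrates the idea rather than reproducing the original training code:

```python
import torch
import torch.nn as nn

# Made-up unigram counts over a tiny vocabulary (index -> count in training data).
word_counts = torch.tensor([5000., 1200., 300., 80., 20.])
vocab_size, hidden_size = len(word_counts), 512

decoder = nn.Linear(hidden_size, vocab_size)  # Softmax classifier of the RNN

# Trick 1: initialize biases to log P(word), so the untrained model already
# predicts words at their empirical training frequency.
with torch.no_grad():
    decoder.bias.copy_(torch.log(word_counts / word_counts.sum()))

# Trick 2: simple per-element gradient clipping after backpropagation.
hidden = torch.randn(16, hidden_size)          # placeholder batch of RNN states
targets = torch.randint(vocab_size, (16,))     # placeholder target words
loss = nn.functional.cross_entropy(decoder(hidden), targets)
loss.backward()
torch.nn.utils.clip_grad_value_(decoder.parameters(), clip_value=5.0)
```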
diff --git a/alignment-papers-text/1704.00380_Word-Alignment-Based_Segment-Level_Machine_Transla.md b/alignment-papers-text/1704.00380_Word-Alignment-Based_Segment-Level_Machine_Transla.md
new file mode 100644
index 0000000000000000000000000000000000000000..293bd9be5e8c0bece1bfcde53705e274e24c5849
--- /dev/null
+++ b/alignment-papers-text/1704.00380_Word-Alignment-Based_Segment-Level_Machine_Transla.md
@@ -0,0 +1,456 @@
+## **Word-Alignment-Based Segment-Level Machine Translation Evaluation** **using Word Embeddings**


**Junki Matsuo** and **Mamoru Komachi**
Graduate School of System Design,
Tokyo Metropolitan University, Japan
matsuo-junki@ed.tmu.ac.jp, komachi@tmu.ac.jp


**Katsuhito Sudoh**
NTT Communication Science Laboratories, Japan
sudoh@is.naist.jp _[∗]_


_∗_ The last author is currently affiliated with Nara Institute of Science and Technology, Japan.


**Abstract**


One of the most important problems in machine translation (MT) evaluation is to evaluate the similarity between translation hypotheses with different surface forms from the reference, especially at the segment level. We propose to use word embeddings to perform word alignment for segment-level MT evaluation. We performed experiments with three types of alignment methods using word embeddings. We evaluated our proposed methods with various translation datasets. Experimental results show that our proposed methods outperform previous word embeddings-based methods.


**1** **Introduction**


Automatic evaluation of machine translation (MT) systems without human intervention has gained importance. For example, BLEU (Papineni et al., 2002) has improved MT research in the last decade. However, BLEU has little correlation with human judgment at the segment level since it was originally proposed for system-level evaluation. Segment-level evaluation is crucial for analyzing MT outputs to improve system accuracy, but there are few studies addressing the issue of segment-level evaluation of MT outputs.
Another issue in MT evaluation is to evaluate MT hypotheses that are semantically equivalent to the reference but have different surface forms. For instance, BLEU does not consider any words that do not match the reference at the surface level. METEOR-Universal (Denkowski and Lavie, 2014) handles word similarities better, but it uses external resources that require time-consuming annotations. It is also not as simple as BLEU and its score is difficult to interpret. DREEM (Chen and Guo, 2015), another metric that addresses the issue of word similarity, does not require human annotations and uses distributed representations for MT evaluation. It shows higher accuracy than popular metrics such as BLEU and METEOR.

Therefore, we follow the approach of DREEM to propose a lightweight MT evaluation measure that employs only a raw corpus as an external resource. We adopt sentence similarity measures proposed by Song and Roth (2015) for a Semantic Textual Similarity (STS) task. They use word embeddings to align words so that the sentence similarity score takes near-synonymous expressions into account, and propose three types of heuristics using m:n (average), 1:n (maximum) and 1:1 (Hungarian) alignments. It has been reported that sentence similarity calculated with a word alignment based on word embeddings shows high accuracy on STS tasks.
We evaluated the word-alignment-based sentence similarity for MT evaluation using the WMT12, WMT13, and WMT15 datasets of European–English translation and the WAT2015 and NTCIR8 datasets of Japanese–English translation. Experimental results confirmed that the maximum alignment similarity outperforms previous word embeddings-based methods in European–English translation tasks and the average alignment similarity has the highest human correlation in Japanese–English translation tasks.


**2** **Related Work**


Several studies have examined automatic evaluation of MT systems. The de facto standard automatic MT evaluation metric BLEU (Papineni et al., 2002) may assign an inappropriate score to a translation hypothesis that uses similar but different words because it considers only word n-gram precision (Callison-Burch et al., 2006). METEOR-Universal (Denkowski and Lavie, 2014) alleviates the problem of surface mismatch by using a thesaurus and a stemmer, but it needs external resources such as WordNet. In this work, we used a distributed word representation to evaluate semantic relatedness between the hypothesis and reference sentences. This approach has the advantage that it can be implemented with only a raw monolingual corpus.
To address the problem of word n-gram precision, Wang and Merlo (2016) propose to smooth it using word embeddings. They also employ maximum alignment between n-grams of hypothesis and reference sentences and a threshold to cut off n-gram embeddings with low similarity. Their work is similar to our maximum alignment similarity method, but they only experimented on European–English datasets, where maximum alignment works better than average alignment.
The previous method most similar to ours is DREEM (Chen and Guo, 2015). It has been shown to achieve state-of-the-art accuracy compared with popular metrics such as BLEU and METEOR. It uses various types of representations: word representations are trained with a neural network and sentence representations are trained with a recursive auto-encoder. DREEM uses cosine similarity between distributed representations of hypothesis and reference as a translation evaluation score. Both their and our methods employ word embeddings to compute a sentence similarity score, but our method differs in the use of alignment and length penalty. As for alignment, we set a threshold to remove noisy alignments, whereas they use a hyper-parameter to down-weight overall sentence similarity. As for length penalty, we compared average, maximum, and Hungarian alignments to compensate for the difference between the lengths of translation hypothesis and reference, whereas they use an exponential penalty to normalize the length.
Another way to improve the robustness of MT evaluation is to use a character-based model. ChrF (Popović, 2015) is one such metric that uses character n-grams. It is a harmonic mean of character n-gram precision and recall. It works well for morphologically rich languages. We, instead, adopt a word-based approach because our target language, English, is morphologically simple but etymologically complex.


**3** **Word-Alignment-Based Sentence Similarity using Word Embeddings**


In this section, we introduce word-alignment-based sentence similarity (Song and Roth, 2015) applied as an MT evaluation metric.
Song and Roth (2015) propose to use word embeddings to align words in a pair of sentences. Their approach shows promising results in STS tasks.
In MT evaluation, a word in the source language aligns to either a word or a phrase in the target language; therefore, it is not likely for a word to align with the whole sentence. Thus, we use several heuristics to constrain word alignment between the hypothesis and reference sentences.
In the following subsections, we present three sentence similarity measures. All of them use cosine similarity to calculate word similarity. To avoid alignment between unrelated words, we cut off word alignments whose similarity is less than a threshold value.


**3.1** **Average Alignment Similarity**


First, the average alignment similarity (AAS) heuristic aligns a word with multiple words in a sentence pair. The similarity of words between a hypothesis sentence and a reference sentence is calculated, and AAS is given by averaging the word similarity scores of all _|x||y|_ combinations of words:

$$\mathrm{AAS}(x, y) = \frac{1}{|x||y|} \sum_{i=1}^{|x|} \sum_{j=1}^{|y|} \phi(x_i, y_j) \qquad (1)$$

Here, _x_ is a hypothesis and _y_ is a reference; _xi_ and _yj_ represent words in each sentence.


**3.2** **Maximum Alignment Similarity**


Second, we propose the maximum alignment similarity (MAS) heuristic, which averages, for each word, only the maximum similarity score over its possible alignments. By definition, MAS itself is an asymmetric score, so we symmetrize it by averaging the score in both directions:

$$\mathrm{MAS}_{\mathrm{asym}}(a, b) = \frac{1}{|a|} \sum_{i=1}^{|a|} \max_{j} \phi(a_i, b_j) \qquad (2)$$

$$\mathrm{MAS}(x, y) = \frac{1}{2}\left(\mathrm{MAS}_{\mathrm{asym}}(x, y) + \mathrm{MAS}_{\mathrm{asym}}(y, x)\right) \qquad (3)$$

Here, _a_ and _b_ are a hypothesis and a reference sentence (in either order), and _ai_ and _bj_ are their words.


**3.3** **Hungarian Alignment Similarity**


Third, we introduce the Hungarian alignment similarity (HAS) to restrict word alignment to 1:1. HAS formulates the task of word alignment as bipartite graph matching, where the words in a hypothesis and a reference are represented as nodes whose edges have weight _φ_ ( _xi, yj_ ). One-to-one word alignment is achieved by calculating the maximum alignment of the perfect bipartite graph. For each word _xi_ included in a hypothesis sentence, HAS chooses the word _h_ ( _xi_ ) in a reference sentence _y_ by the Hungarian method (Kuhn, 1955):

$$\mathrm{HAS}(x, y) = \frac{1}{\min(|x|, |y|)} \sum_{i=1}^{|x|} \phi(x_i, h(x_i)) \qquad (4)$$
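As a concrete illustration of Equations (1)–(4), the following sketch computes AAS, MAS, and HAS from word vectors with cosine similarity; it is our illustration rather than the authors' code, the embedding matrices are random placeholders, and zeroing sub-threshold similarities is one assumed reading of the cut-off described above:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def phi_matrix(X, Y, threshold=0.2):
    """Cosine similarities between word vectors X (|x| x d) and Y (|y| x d),
    with similarities below the threshold cut off (set to 0)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sim = Xn @ Yn.T
    sim[sim < threshold] = 0.0
    return sim

def aas(sim):                      # Eq. (1): m:n average alignment
    return sim.mean()

def mas(sim):                      # Eqs. (2)-(3): symmetrized 1:n max alignment
    return 0.5 * (sim.max(axis=1).mean() + sim.max(axis=0).mean())

def has(sim):                      # Eq. (4): 1:1 Hungarian alignment
    rows, cols = linear_sum_assignment(-sim)   # maximize total similarity
    return sim[rows, cols].sum() / min(sim.shape)

# Toy example with random "embeddings" for a 4-word hypothesis and 5-word reference.
rng = np.random.default_rng(0)
hyp, ref = rng.normal(size=(4, 300)), rng.normal(size=(5, 300))
sim = phi_matrix(hyp, ref)
print(aas(sim), mas(sim), has(sim))
```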
**4** **Experiment**


We report the results of MT evaluation on the European–English translation tasks of the WMT12, WMT13, and WMT15 datasets and the Japanese–English tasks of the WAT2015 and NTCIR8 datasets. For the WMT datasets, we compared our metrics with BLEU and DREEM taken from the official scores of the WMT15 metrics task (Stanojević et al., 2015). For the WAT2015 and NTCIR8 datasets, the three types of proposed methods are compared.


**4.1** **Experimental Setting**


We used the WMT12, WMT13, and WMT15 datasets containing a total of 137,007 sentences in French, Finnish, German, Czech, and Russian translated to English. As Japanese–English translation datasets, WAT2015 includes 600 sentences and NTCIR8 includes 1,200 sentences. We measured the correlation between the human adequacy score and each of the evaluation metrics. We used Kendall's _τ_ for segment-level evaluation. We used a pre-trained word2vec model trained on the Google News corpus for calculating word similarity in our proposed methods. [1]


[1] https://code.google.com/archive/p/word2vec/


**4.2** **Result**


Table 1 shows a breakdown of correlation scores for each language pair in WMT15. MAS shows the best accuracy among all the proposed metrics for all language pairs. Its accuracy is better than that of DREEM for all language pairs except for Czech–English. This result shows that removal of noisy word embeddings, by either using a threshold or 1:n alignment, is important for European–English datasets.
Figure 1 shows the correlation of word-alignment-based methods for the WMT datasets with varying threshold values. For the WMT datasets, MAS has the highest correlation scores among the three word-alignment-based methods. A threshold value of 0.2 gives the maximum correlation for MAS for all WMT datasets.
Figure 2 shows the correlation of word-alignment-based methods for the two Japanese–English datasets with a varying threshold. Although MAS has the highest correlation for the WMT datasets, AAS has the highest correlation for the WAT2015 and NTCIR8 datasets.
Table 2 describes segment-level correlation results for the WMT, WAT2015, and NTCIR8 datasets. MAS has the highest correlation score for the WMT datasets, whereas AAS has the highest correlation score for the WAT2015 and NTCIR8 datasets.


Figure 1: Correlation of each word-alignment-based method with varying the threshold for the WMT datasets.


Figure 2: Correlation of each word-alignment-based method with varying the threshold for the WAT2015 and NTCIR8 datasets.

|Evaluation Metrics|Fr-En|Fi-En|De-En|Cs-En|Ru-En|Average|
|---|---|---|---|---|---|---|
|Average Alignment Similarity|0.324|0.247|0.304|0.288|0.273|0.287|
|Maximum Alignment Similarity|**0.368**|**0.355**|**0.392**|0.400|**0.349**|**0.373**|
|Hungarian Alignment Similarity|0.223|0.211|0.259|0.251|0.239|0.237|
|BLEU (Stanojević et al., 2015)|0.358|0.308|0.360|0.391|0.329|0.349|
|DREEM (Chen and Guo, 2015)|0.362|0.340|0.368|**0.423**|0.348|0.368|


Table 1: Kendall's _τ_ correlations of automatic evaluation metrics and official human judgements for the WMT15 dataset. (Fr: French, Fi: Finnish, De: German, Cs: Czech, Ru: Russian, En: English)

|Evaluation Metrics|WMT12|WMT13|WMT15|WAT2015|NTCIR8|
|---|---|---|---|---|---|
|Average Alignment Similarity|0.211|0.312|0.287|**0.332**|**0.343**|
|Maximum Alignment Similarity|**0.353**|**0.381**|**0.373**|0.235|0.171|
|Hungarian Alignment Similarity|0.106|0.272|0.237|0.092|0.075|


Table 2: Kendall's _τ_ correlations of word-alignment-based methods and the official human judgements for each dataset. (WMT12, WMT13, and WMT15: European–English datasets, and WAT2015 and NTCIR8: Japanese–English datasets)


**5** **Discussion**


Figure 1 demonstrated that MAS and AAS are more stable than HAS for European–English datasets. This may be because it is relatively easy for AAS and MAS to perform word alignment using word embeddings in translation pairs of similar languages, but HAS suffers from alignment sparsity more than the other methods. In European–English translation, all the word-alignment-based methods perform poorly when using no word embeddings.
Unlike the European–English translation task, the Japanese–English translation task exhibits a different tendency. Figure 2 shows the comparison between the three types of word-alignment-based methods for each threshold. This is partly because word embeddings help evaluate lexically similar word pairs but fail to model syntactic variations. Also, we note that in the Japanese–English datasets, AAS achieved the highest correlation. We suppose that this is because in Japanese–English translation, it is difficult to cover all the source information in the target language, resulting in misalignment of inadequate words by HAS and MAS.
Table 2 shows that MAS performs stably on the WMT datasets. In particular, the Kendall's _τ_ score of HAS in WMT12 exhibits very low correlation. It seems that the 1:1 alignment is too strict to calculate sentence similarity in MT evaluation, while the 1:n (MAS) alignment performs well, possibly because of the removal of noisy word alignments. On the other hand, AAS is more stable than MAS and HAS for the WAT2015 and NTCIR8 datasets. As a rule of thumb, AAS with high threshold values (0.6–0.9) shows stable high correlation across all language pairs, but if it is possible to use development data to tune the parameters, MAS with different threshold values should be considered.


**6** **Conclusion**


In this paper, we presented word-alignment-based MT evaluation metrics using distributed word representations. In our experiments, MAS showed higher correlation with human evaluation than other automatic MT metrics such as BLEU and DREEM for European–English datasets. On the other hand, for Japanese–English datasets, AAS showed higher correlation with human evaluation than other metrics. These results indicate that appropriate word alignment using word embeddings is helpful in evaluating the MT output.


**References**


Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the Role of BLEU in Machine Translation Research. In _Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics_ . pages 249–256.


Boxing Chen and Hongyu Guo. 2015. Representation Based Translation Evaluation Metrics.
In _Proceed-_ +_ings of the 53rd Annual Meeting of the Association_ +_for Computational Linguistics and the 7th Interna-_ +_tional Joint Conference on Natural Language Pro-_ +_cessing (Volume 2: Short Papers)_ . pages 150–155. + + +Michael Denkowski and Alon Lavie. 2014. Meteor +Universal: Language Specific Translation Evaluation for Any Target Language. In _Proceedings of the_ +_Ninth Workshop on Statistical Machine Translation_ . +pages 376–380. + + +Harold W. Kuhn. 1955. The Hungarian Method for the +Assignment Problem. In _Naval Research Logistics_ +_Quarterly_ . pages 83–97. + + +Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic +Evaluation of Machine Translation. In _Proceed-_ +_ings of the 40th annual meeting on association for_ +_computational linguistics. Association for Computa-_ +_tional Linguistics_ . pages 311–318. + + +Maja Popovi´c. 2015. ChrF: Character n-gram F-score +for Automatic MT Evaluation. In _Proceedings of the_ +_Tenth Workshop on Statistical Machine Translation_ . +pages 392–395. + + +Yangqui Song and Dan Roth. 2015. Unsupervised +Sparse Vector Densification for Short Text Similarity. In _Proceedings of the 2015 Annual Conference_ +_of the North American Chapter of the ACL_ . pages +1275–1280. + + +Miloˇs Stanojevi´c, Amir Kamran, Philipp Koehn, and +Ondˇrej Bojar. 2015. Results of the WMT15 Metrics +Shared Task. In _Proceedings of the Tenth Workshop_ +_on Statistical Machine Translation_ . pages 256–273. + + +Haozhou Wang and Paola Merlo. 2016. Modifications of Machine Translation Evaluation Metrics by +Using Word Embeddings. In _Proceedings of the_ +_Sixth Workshop on Hybrid Approaches to Transla-_ +_tion (HyTra6)_ . pages 33–41. + + diff --git a/alignment-papers-text/1807.03756_Latent_Alignment_and_Variational_Attention.md b/alignment-papers-text/1807.03756_Latent_Alignment_and_Variational_Attention.md new file mode 100644 index 0000000000000000000000000000000000000000..083041f5486eab1523c7ced5250631067be8eb22 --- /dev/null +++ b/alignment-papers-text/1807.03756_Latent_Alignment_and_Variational_Attention.md @@ -0,0 +1,1306 @@ +## **Latent Alignment and Variational Attention** + +**Yuntian Deng** _[∗]_ **Yoon Kim** _[∗]_ **Justin Chiu** **Demi Guo** **Alexander M. Rush** + +``` + {dengyuntian@seas,yoonkim@seas,justinchiu@g,dguo@college,srush@seas}.harvard.edu + +``` + +School of Engineering and Applied Sciences +Harvard University +Cambridge, MA, USA + + +**Abstract** + + +Neural attention has become central to many state-of-the-art models in natural +language processing and related domains. Attention networks are an easy-to-train +and effective method for softly simulating alignment; however, the approach does +not marginalize over latent alignments in a probabilistic sense. This property makes +it difficult to compare attention to other alignment approaches, to compose it with +probabilistic models, and to perform posterior inference conditioned on observed +data. A related latent approach, hard attention, fixes these issues, but is generally +harder to train and less accurate. This work considers _variational attention_ networks, alternatives to soft and hard attention for learning latent variable alignment +models, with tighter approximation bounds based on amortized variational inference. We further propose methods for reducing the variance of gradients to make +these approaches computationally feasible. 
Experiments show that for machine +translation and visual question answering, inefficient exact latent variable models +outperform standard neural attention, but these gains go away when using hard +attention based training. On the other hand, variational attention retains most of +the performance gain but with training speed comparable to neural attention. + + +**1** **Introduction** + + +Attention networks [6] have quickly become the foundation for state-of-the-art models in natural +language understanding, question answering, speech recognition, image captioning, and more [15, 81, +16, 14, 63, 80, 71, 62]. Alongside components such as residual blocks and long-short term memory +networks, soft attention provides a rich neural network building block for controlling gradient flow +and encoding inductive biases. However, more so than these other components, which are often +treated as black-boxes, researchers use intermediate attention decisions directly as a tool for model +interpretability [43, 1] or as a factor in final predictions [25, 68]. From this perspective, attention +plays the role of a latent alignment variable [10, 37]. An alternative approach, hard attention [80], +makes this connection explicit by introducing a latent variable for alignment and then optimizing a +bound on the log marginal likelihood using policy gradients. This approach generally performs worse +(aside from a few exceptions such as [80]) and is used less frequently than its soft counterpart. + + +Still the latent alignment approach remains appealing for several reasons: (a) latent variables facilitate +reasoning about dependencies in a probabilistically principled way, e.g. allowing composition with +other models, (b) posterior inference provides a better basis for model analysis and partial predictions +than strictly feed-forward models, which have been shown to underperform on alignment in machine +translation [38], and finally (c) directly maximizing marginal likelihood may lead to better results. + + +_∗_ Equal contribution. + + +32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada. + + +The aim of this work is to quantify the issues with attention and propose alternatives based on recent +developments in variational inference. While the connection between variational inference and hard +attention has been noted in the literature [4, 41], the space of possible bounds and optimization +methods has not been fully explored and is growing quickly. These tools allow us to better quantify +whether the general underperformance of hard attention models is due to modeling issues (i.e. soft +attention imbues a better inductive bias) or optimization issues. + + + +Our main contribution is a _variational attention_ +approach that can effectively fit latent alignments while remaining tractable to train. We +consider two variants of variational attention: +_categorical_ and _relaxed_ . The categorical method +is fit with amortized variational inference using +a learned inference network and policy gradient +with a soft attention variance reduction baseline. +With an appropriate inference network (which +conditions on the entire source/target), it can be +used at training time as a drop-in replacement +for hard attention. The relaxed version assumes +that the alignment is sampled from a Dirichlet +distribution and hence allows attention over multiple source elements. + + + +Figure 1: Sketch of variational attention applied to +machine translation. 
Two alignment distributions are shown, the blue prior _p_, and the red variational posterior _q_ taking into account future observations. Our aim is to use _q_ to improve estimates of _p_ and to support improved inference of _z_ .


Experiments describe how to implement this approach for two major attention-based models: neural machine translation and visual question answering (Figure 1 gives an overview of our approach for machine translation). We first show that maximizing exact marginal likelihood can increase performance over soft attention. We further show that with variational (categorical) attention, alignment variables significantly surpass both soft and hard attention results without requiring much more difficult training. We further explore the impact of posterior inference on alignment decisions, and how latent variable models might be employed. Our code is available at `[https://github.com/harvardnlp/var-attn/](https://github.com/harvardnlp/var-attn/)` .


**Related Work** Latent alignment has long been a core problem in NLP, starting with the seminal IBM models [11], HMM-based alignment models [75], and a fast log-linear reparameterization of the IBM 2 model [20]. Neural soft attention models were originally introduced as an alternative approach for neural machine translation [6], and have subsequently been successful on a wide range of tasks (see [15] for a review of applications). Recent work has combined neural attention with traditional alignment [18, 72] and induced structure/sparsity [48, 33, 44, 85, 54, 55, 49], which can be combined with the variational approaches outlined in this paper.


In contrast to soft attention models, hard attention [80, 3] approaches use a single sample at training time instead of a distribution. These models have proven much more difficult to train, and existing works typically treat hard attention as a black-box reinforcement learning problem with log-likelihood as the reward [80, 3, 53, 26, 19]. Two notable exceptions are [4, 41]: both utilize amortized variational inference to learn a sampling distribution which is used to obtain importance-sampled estimates of the log marginal likelihood [12]. Our method uses different estimators and targets the single-sample approach for efficiency, allowing the method to be employed for NMT and VQA applications.


There has also been significant work in using variational autoencoders for language and translation applications. Of particular interest are those that augment an RNN with latent variables (typically Gaussian) at each time step [17, 22, 66, 23, 40] and those that incorporate latent variables into sequence-to-sequence models [84, 7, 70, 64]. The term "variational attention" has also been used to refer to treating the output of attention (commonly called the context vector) as a latent variable [7], or to modeling both the memory and the alignment as latent variables [9]. Our work differs by modeling an explicit model component (alignment) as a latent variable instead of auxiliary latent variables (e.g. topics). Finally, there is some parallel work [78, 67] which also performs exact/approximate marginalization over latent alignments for sequence-to-sequence learning.
+ + +2 + + +**2** **Background: Latent Alignment and Neural Attention** + + +We begin by introducing notation for latent alignment, and then show how it relates to neural attention. +For clarity, we are careful to use _alignment_ to refer to this probabilistic model (Section 2.1), and _soft_ +and _hard_ attention to refer to two particular inference approaches used in the literature to estimate +alignment models (Section 2.2). + + +**2.1** **Latent Alignment** + + +Figure 2(a) shows a latent alignment model. Let _x_ be an observed set with associated members +_{x_ 1 _, . . ., xi, . . ., xT }_ . Assume these are vector-valued (i.e. _xi ∈_ R _[d]_ ) and can be stacked to form a +matrix _X ∈_ R _[d][×][T]_ . Let the observed ˜ _x_ be an arbitrary “query”. These generate a discrete output +variable _y ∈Y_ . This process is mediated through a latent alignment variable _z_, which indicates +which member (or mixture of members) of _x_ generates _y_ . The generative process we consider is: + + +_z ∼D_ ( _a_ ( _x,_ ˜ _x_ ; _θ_ )) _y ∼_ _f_ ( _x, z_ ; _θ_ ) + + +where _a_ produces the parameters for an alignment distribution _D_ . The function _f_ gives a distribution +over the output, e.g. an exponential family. To fit this model to data, we set the model parameters _θ_ +by maximizing the log marginal likelihood of training examples ( _x,_ ˜ _x,_ ˆ _y_ ): [2] + + +max log _p_ ( _y_ = ˆ _y | x,_ ˜ _x_ ) = max log E _z_ [ _f_ ( _x, z_ ; _θ_ ) _y_ ˆ] +_θ_ _θ_ + + + +Directly maximizing this log marginal likelihood in the presence of the latent variable _z_ +is often difficult due to the expectation (though +tractable in certain cases). + + +For this to represent an alignment, we restrict +the variable _z_ to be in the simplex ∆ _[T][ −]_ [1] over +source indices _{_ 1 _, . . ., T_ _}_ . We consider two distributions for this variable: first, let _D_ be a _cat-_ +_egorical_ where _z_ is a one-hot vector with _zi_ = 1 +if _xi_ is selected. For example, _f_ ( _x, z_ ) could use +_z_ to pick from _x_ and apply a softmax layer to +predict _y_, i.e. _f_ ( _x, z_ ) = softmax( **W** _Xz_ ) and +**W** _∈_ R _[|Y|×][d]_, + + + +(a) + + + + + +(b) + + + + + +Figure 2: Models over observed set _x_, query ˜ _x_, and +alignment _z_ . (a) Latent alignment model, (b) Soft attention with _z_ absorbed into prediction network. + + + +log _p_ ( _y_ = ˆ _y | x,_ ˜ _x_ ) = log + + + +_T_ + + +_p_ ( _zi_ = 1 _| x,_ ˜ _x_ ) _p_ ( _y_ = ˆ _y | x, zi_ = 1) = log E _z_ [softmax( **W** _Xz_ ) _y_ ˆ] + +_i_ =1 + + + +This computation requires a factor of _O_ ( _T_ ) additional runtime, and introduces a major computational +factor into already expensive deep learning models. [3] + + +Second we consider a _relaxed_ alignment where _z_ is a mixture taken from the interior of the simplex by +letting _D_ be a Dirichlet. This objective looks similar to the categorical case, i.e. log _p_ ( _y_ = ˆ _y | x,_ ˜ _x_ ) = +log E _z_ [softmax( **W** _Xz_ ) _y_ ˆ], but the resulting expectation is intractable to compute exactly. + + +**2.2** **Attention Models: Soft and Hard** + + +When training deep learning models with gradient methods, it can be difficult to use latent alignment +directly. As such, two alignment-like approaches are popular: _soft attention_ replaces the probabilistic +model with a deterministic soft function and _hard attention_ trains a latent alignment model by +maximizing a lower bound on the log marginal likelihood (obtained from Jensen’s inequality) with +policy gradient-style training. 
We briefly describe how these methods fit into this notation. + + +2When clear from context, the random variable is dropped from E[ _·_ ]. We also interchangeably use _p_ (ˆ _y | x,_ ˜ _x_ ) +and _f_ ( _x, z_ ; _θ_ ) _y_ ˆ to denote _p_ ( _y_ = ˆ _y | x,_ ˜ _x_ ). +3Although not our main focus, explicit marginalization is sometimes tractable with efficient matrix operations +on modern hardware, and we compare the variational approach to explicit enumeration in the experiments. In +some cases it is also possible to efficiently perform exact marginalization with dynamic programming if one +imposes additional constraints (e.g. monotonicity) on the alignment distribution [83, 82, 58]. + + +3 + + +**Soft Attention** Soft attention networks use an altered model shown in Figure 2b. Instead of using a +latent variable, they employ a deterministic network to compute an expectation over the alignment +variable. We can write this model using the same functions _f_ and _a_ from above, + + +log _p_ soft( _y | x,_ ˜ _x_ ) = log _f_ ( _x,_ E _z_ [ _z_ ]; _θ_ ) = log softmax( **W** _X_ E _z_ [ _z_ ]) + + +A major benefit of soft attention is efficiency. Instead of paying a multiplicative penalty of _O_ ( _T_ ) +or requiring integration, the soft attention model can compute the expectation before _f_ . While +formally a different model, soft attention has been described as an approximation of alignment [80]. +Since E[ _z_ ] _∈_ ∆ _[T][ −]_ [1], soft attention uses a convex combination of the input representations _X_ E[ _z_ ] +(the _context vector_ ) to obtain a distribution over the output. While also a “relaxed” decision, this +expression differs from both the latent alignment models above. Depending on _f_, the gap between +E[ _f_ ( _x, z_ )] and _f_ ( _x,_ E[ _z_ ]) may be large. + + +However there are some important special cases. In the case where _p_ ( _z | x,_ ˜ _x_ ) is deterministic, we +have E[ _f_ ( _x, z_ )] = _f_ ( _x,_ E[ _z_ ]), and _p_ ( _y | x,_ ˜ _x_ ) = _p_ soft( _y | x,_ ˜ _x_ ). In general we can bound the absolute +difference based on the maximum curvature of _f_, as shown by the following proposition. +**Proposition 1.** _Define gx,y_ ˆ : ∆ _[T][ −]_ [1] _�→_ [0 _,_ 1] _to be the function given by gx,y_ ˆ( _z_ ) = _f_ ( _x, z_ ) _y_ ˆ _(i.e._ +_gx,y_ ˆ( _z_ ) = _p_ ( _y_ = ˆ _y | x,_ ˜ _x, z_ )) _for a twice differentiable function f_ _. Let Hgx,y_ ˆ( _z_ ) _be the Hessian of_ +_gx,y_ ˆ( _z_ ) _evaluated at z, and further suppose ∥Hgx,y_ ˆ( _z_ ) _∥_ 2 _≤_ _c for all z ∈_ ∆ _[T][ −]_ [1] _,_ ˆ _y ∈Y, and x, where_ +_∥· ∥_ 2 _is the spectral norm. Then for all_ ˆ _y ∈Y,_ + + +_| p_ ( _y_ = ˆ _y | x,_ ˜ _x_ ) _−_ _p_ soft( _y_ = ˆ _y | x,_ ˜ _x_ ) _| ≤_ _c_ + + +The proof is given in Appendix A. [4] Empirically the soft approximation works remarkably well, and +often moves towards a sharper distribution with training. Alignment distributions learned this way +often correlate with human intuition (e.g. word alignment in machine translation) [38]. [5] + + +**Hard Attention** Hard attention is an approximate inference approach for latent alignment (Figure 2a) [80, 4, 53, 26]. Hard attention takes a single hard sample of _z_ (as opposed to a soft mixture) +and then backpropagates through the model. 
The approach is derived by two choices: First apply Jensen’s inequality to get a lower bound on the log marginal likelihood, log E _z_ [ _p_ ( _y | x, z_ )] _≥_ +E _z_ [log _p_ ( _y | x, z_ )], then maximize this lower-bound with policy gradients/REINFORCE [76] to obtain +unbiased gradient estimates, + + +_∇θ_ E _z_ [log _f_ ( _x, z_ ))] = E _z_ [ _∇θ_ log _f_ ( _x, z_ ) + (log _f_ ( _x, z_ ) _−_ _B_ ) _∇θ_ log _p_ ( _z | x,_ ˜ _x_ )] _,_ + + +where _B_ is a baseline that can be used to reduce the variance of this estimator. To implement this +approach efficiently, hard attention uses Monte Carlo sampling to estimate the expectation in the +gradient computation. For efficiency, a single sample from _p_ ( _z | x,_ ˜ _x_ ) is used, in conjunction with +other tricks to reduce the variance of the gradient estimator (discussed more below) [80, 50, 51]. + + +**3** **Variational Attention for Latent Alignment Models** + + +Amortized variational inference (AVI, closely related to variational auto-encoders) [36, 61, 50] is a +class of methods to efficiently approximate latent variable inference, using learned inference networks. +In this section we explore this technique for deep latent alignment models, and propose methods for +_variational attention_ that combine the benefits of soft and hard attention. + + +First note that the key approximation step in hard attention is to optimize a lower bound derived from +Jensen’s inequality. This gap could be quite large, contributing to poor performance. [6] Variational + + +4It is also possible to study the gap in finer detail by considering distributions over the inputs of _f_ that have +high probability under approximately linear regions of _f_, leading to the notion of _approximately expectation-_ +_linear_ functions, which was originally proposed and studied in the context of dropout [46]. +5Another way of viewing soft attention is as simply a non-probabilistic learned function. While it is possible +that such models encode better inductive biases, our experiments show that when properly optimized, latent +alignment attention with explicit latent variables do outperform soft attention. +6Prior works on hard attention have generally approached the problem as a black-box reinforcement learning +problem where the rewards are given by log _f_ ( _x, z_ ). Ba et al. (2015) [4] and Lawson et al. (2017) [41] are +the notable exceptions, and both works utilize the framework from [51] which obtains multiple samples from a +learned sampling distribution to optimize the IWAE bound [12] or a reweighted wake-sleep objective. + + +4 + + +**Algorithm 1** Variational Attention + +_λ ←_ enc( _x,_ ˜ _x, y_ ; _φ_ ) _▷_ _Compute var. params_ +_z ∼_ _q_ ( _z_ ; _λ_ ) _▷_ _Sample var. attention_ +log _f_ ( _x, z_ ) _▷_ Compute output dist +_z_ _[′]_ _←_ E _p_ ( _z′ | x,x_ ˜)[ _z_ _[′]_ ] _▷_ Compute soft atten. +_B_ = log _f_ ( _x, z_ _[′]_ ) _▷_ Compute baseline dist +Backprop _∇θ_ and _∇φ_ based on eq. 1 and KL + + + +**Algorithm 2** Variational Relaxed Attention + +max _θ_ E _z∼p_ [log _p_ ( _y | x, z_ )] _▷_ _Pretrain fixed θ_ +_. . ._ +_u ∼U_ _▷_ _Sample unparam._ +_z ←_ _gφ_ ( _u_ ) _▷_ _Reparam sample_ +log _f_ ( _x, z_ ) _▷_ Compute output dist +Backprop _∇θ_ and _∇φ_, reparam and KL + + + +inference methods directly aim to tighten this gap. 
In particular, the _evidence lower bound_ (ELBO) +is a parameterized bound over a family of distributions _q_ ( _z_ ) _∈Q_ (with the constraint that the +supp _q_ ( _z_ ) _⊆_ supp _p_ ( _z | x,_ ˜ _x, y_ )), + + +log E _z∼p_ ( _z | x,x_ ˜)[ _p_ ( _y | x, z_ )] _≥_ E _z∼q_ ( _z_ )[log _p_ ( _y | x, z_ )] _−_ KL[ _q_ ( _z_ ) _∥_ _p_ ( _z | x,_ ˜ _x_ )] + + +This allows us to search over variational distributions _q_ to improve the bound. It is tight when the +variational distribution is equal to the posterior, i.e. _q_ ( _z_ ) = _p_ ( _z | x,_ ˜ _x, y_ ). Hard attention is a special +case of the ELBO with _q_ ( _z_ ) = _p_ ( _z | x,_ ˜ _x_ ). + + +There are many ways to optimize the evidence lower bound; an effective choice for deep learning +applications is to use _amortized variational inference_ . AVI uses an _inference network_ to produce the +parameters of the variational distribution _q_ ( _z_ ; _λ_ ). The inference network takes in the input, query, +and the output, i.e. _λ_ = _enc_ ( _x,_ ˜ _x, y_ ; _φ_ ). The objective aims to reduce the gap with the inference +network _φ_ while also training the generative model _θ_, + +max _φ,θ_ [E] _[z][∼][q]_ [(] _[z]_ [;] _[λ]_ [)][[log] _[ p]_ [(] _[y][ |][ x, z]_ [)]] _[ −]_ [KL[] _[q]_ [(] _[z]_ [;] _[ λ]_ [)] _[ ∥]_ _[p]_ [(] _[z][ |][ x,]_ [ ˜] _[x]_ [)]] + + +With the right choice of optimization strategy and inference network this form of variational attention +can provide a general method for learning latent alignment models. In the rest of this section, we +consider strategies for accurately and efficiently computing this objective; in the next section, we +describe instantiations of _enc_ for specific domains. + + +**Algorithm 1: Categorical Alignments** First consider the case where _D_, the alignment distribution, +and _Q_, the variational family, are categorical distributions. Here the generative assumption is that +_y_ is generated from a single index of _x_ . Under this setup, a low-variance estimator of _∇θ_ ELBO, is +easily obtained through a single sample from _q_ ( _z_ ). For _∇φ_ ELBO, the gradient with respect to the +KL portion is easily computable, but there is an optimization issue with the gradient with respect to +the first term E _z∼q_ ( _z_ )[log _f_ ( _x, z_ ))]. + + +Many recent methods target this issue, including neural estimates of baselines [50, 51], RaoBlackwellization [59], reparameterizable relaxations [31, 47], and a mix of various techniques + +[73, 24]. We found that an approach using REINFORCE [76] along with a specialized baseline was +effective. However, note that REINFORCE is only one of the inference choices we can select, and +as we will show later, alternative approaches such as reparameterizable relaxations work as well. +Formally, we first apply the likelihood-ratio trick to obtain an expression for the gradient with respect +to the inference network parameters _φ_, + + +_∇φ_ E _z∼q_ ( _z_ )[log _p_ ( _y | x, z_ )] = E _z∼q_ ( _z_ )[(log _f_ ( _x, z_ ) _−_ _B_ ) _∇φ_ log _q_ ( _z_ )] + + +As with hard attention, we take a single Monte Carlo sample (now drawn from the variational +distribution). Variance reduction of this estimate falls to the baseline term _B_ . The ideal (and intuitive) +baseline would be E _z∼q_ ( _z_ )[log _f_ ( _x, z_ )], analogous to the value function in reinforcement learning. +While this term cannot be easily computed, there is a natural, cheap approximation: soft attention (i.e. +log _f_ ( _x,_ E[ _z_ ])). 
Then the gradient is + + + + +- _∇φ_ log _q_ ( _z | x,_ ˜ _x_ ) (1) + + + +E _z∼q_ ( _z_ ) + + + +�� _f_ ( _x, z_ ) +log +_f_ ( _x,_ E _z′∼p_ ( _z′ | x,x_ ˜)[ _z_ _[′]_ ]) + + + +Effectively this weights gradients to _q_ based on the ratio of the inference network alignment approach +to a soft attention baseline. Notably the expectation in the soft attention is over _p_ (and not over _q_ ), +and therefore the baseline is constant with respect to _φ_ . Note that a similar baseline can also be used +for hard attention, and we apply it to both variational/hard attention models in our experiments. + + +5 + + +**Algorithm 2: Relaxed Alignments** Next consider treating both _D_ and _Q_ as Dirichlets, where _z_ +represents a mixture of indices. This model is in some sense closer to the soft attention formulation +which assigns mass to multiple indices, though fundamentally different in that we still formally treat +alignment as a latent variable. Again the aim is to find a low variance gradient estimator. Instead of +using REINFORCE, certain continuous distributions allow the use reparameterization [36], where +sampling _z ∼_ _q_ ( _z_ ) can be done by first sampling from a simple unparameterized distribution _U_, and +then applying a transformation _gφ_ ( _·_ ), yielding an unbiased estimator, + + +E _u∼U_ [ _∇φ_ log _p_ ( _y|x, gφ_ ( _u_ ))] _−∇φ_ KL [ _q_ ( _z_ ) _∥_ _p_ ( _z | x,_ ˜ _x_ )] + + +The Dirichlet distribution is not directly reparameterizable. While transforming the standard uniform +distribution with the inverse CDF of Dirichlet would result in a Dirichlet distribution, the inverse +CDF does not have an analytical solution. However, we can use rejection based sampling to get a +sample, and employ implicit differentiation to estimate the gradient of the CDF [32]. + + +Empirically, we found the random initialization would result in convergence to uniform Dirichlet +parameters for _λ_ . (We suspect that it is easier to find low KL local optima towards the center of the +simplex). In experiments, we therefore initialize the latent alignment model by first minimizing the +Jensen bound, E _z∼p_ ( _z | x,x_ ˜)[log _p_ ( _y | x, z_ )], and then introducing the inference network. + + +**4** **Models and Methods** + + +We experiment with variational attention in two different domains where attention-based models are +essential and widely-used: neural machine translation and visual question answering. + + +**Neural Machine Translation** Neural machine translation (NMT) takes in a source sentence and +predicts each word of a target sentence _yj_ in an auto-regressive manner. The model first contextually +embeds each source word using a bidirectional LSTM to produce the vectors _x_ 1 _. . . xT_ . The query +_x_ ˜ consists of an LSTM-based representation of the previous target words _y_ 1: _j−_ 1. Attention is used +to identify which source positions should be used to predict the target. The parameters of _D_ are +generated from an MLP between the query and source [6], and _f_ concatenates the selected _xi_ with +the query ˜ _x_ and passes it to an MLP to produce the distribution over the next target word _yj_ . + + +For variational attention, the inference network applies a bidirectional LSTM over the source and +the target to obtain the hidden states _x_ 1 _, . . ., xT_ and _h_ 1 _, . . ., hS_, and produces the alignment scores +at the _j_ -th time step via a bilinear map, _s_ [(] _i_ _[j]_ [)] = exp( _h_ _[⊤]_ _j_ **[U]** _[x][i]_ [)][. 
For the categorical case, the scores] + +are normalized, _q_ ( _zi_ [(] _[j]_ [)] = 1) _∝_ _s_ [(] _i_ _[j]_ [)][; in the relaxed case the parameters of the Dirichlet are] _[ α]_ _i_ [(] _[j]_ [)] = +_si_ [(] _[j]_ [)][. Note, the inference network sees the entire target (through bidirectional LSTMs). The word] +embeddings are shared between the generative/inference networks, but other parameters are separate. + + +**Visual Question Answering** Visual question answering (VQA) uses attention to locate the parts of +an image that are necessary to answer a textual question. We follow the recently-proposed “bottom-up +top-down” attention approach [2], which uses Faster R-CNN [60] to obtain object bounding boxes +and performs mean-pooling over the convolutional features (from a pretrained ResNet-101 [27]) in +each bounding box to obtain object representations _x_ 1 _, . . ., xT_ . The query ˜ _x_ is obtained by running +an LSTM over the question, the attention function _a_ passes the query and the object representation +through an MLP. The prediction function _f_ is also similar to the NMT case: we concatenate the +chosen _xi_ with the query ˜ _x_ to use as input to an MLP which produces a distribution over the output. +The inference network _enc_ uses the answer embedding _hy_ and combines it with _xi_ and ˜ _x_ to produce +the variational (categorical) distribution, + + +_q_ ( _zi_ = 1) _∝_ exp( _u_ _[⊤]_ tanh( **U** 1( _xi ⊙_ ReLU( **V** 1 _hy_ )) + **U** 2(˜ _x ⊙_ ReLU( **V** 2 _hy_ )))) + + +where _⊙_ is the element-wise product. This parameterization worked better than alternatives. We did +not experiment with the relaxed case in VQA, as the object bounding boxes already give us the ability +to attend to larger portions of the image. + + +**Inference Alternatives** For categorical alignments we described maximizing a particular variational lower bound with REINFORCE. Note that other alternatives exist, and we briefly discuss them + + +6 + + +here: 1) instead of the single-sample variational bound we can use a multiple-sample importance +sampling based approach such as Reweighted Wake-Sleep (RWS) [4] or VIMCO [52]; 2) instead of +REINFORCE we can approximate sampling from the discrete categorical distribution with GumbelSoftmax [30]; 3) instead of using an inference network we can directly apply Stochastic Variational +Inference (SVI) [28] to learn the local variational parameters in the posterior. + + +**Predictive Inference** At test time, we need to marginalize out the latent variables, i.e. +E _z_ [ _p_ ( _y | x,_ ˜ _x, z_ )] using _p_ ( _z | x,_ ˜ _x_ ). In the categorical case, if speed is not an issue then enumerating alignments is preferable, which incurs a multiplicative cost of _O_ ( _T_ ) (but the enumeration is +parallelizable). Alternatively we experimented with a _K_ -max renormalization, where we only take +the top- _K_ attention scores to approximate the attention distribution (by re-normalizing). This makes +the multiplicative cost constant with respect to _T_ . For the relaxed case, sampling is necessary. + + +**5** **Experiments** + + +**Setup** For NMT we mainly use the IWSLT dataset [13]. This dataset is relatively small, but has +become a standard benchmark for experimental NMT models. We follow the same preprocessing as +in [21] with the same Byte Pair Encoding vocabulary of 14k tokens [65]. 
To show that variational +attention scales to large datasets, we also experiment on the WMT 2017 English-German dataset [8], +following the preprocessing in [74] except that we use newstest2017 as our test set. For VQA, we use +the VQA 2.0 dataset. As we are interested in intrinsic evaluation (i.e. log-likelihood) in addition to +the standard VQA metric, we randomly select half of the standard validation set as the test set (since +we need access to the actual labels). [7] (Therefore the numbers provided are not strictly comparable to +existing work.) While the preprocessing is the same as [2], our numbers are worse than previously +reported as we do not apply any of the commonly-utilized techniques to improve performance on +VQA such as data augmentation and label smoothing. + + +Experiments vary three components of the systems: (a) training objective and model, (b) training +approximations, comparing enumeration or sampling, [8] (c) test inference. All neural models have the +same architecture and the exact same number of parameters _θ_ (the inference network parameters _φ_ +vary, but are not used at test). When training hard and variational attention with sampling both use +the same baseline, i.e the output from soft attention. The full architectures/hyperparameters for both +NMT and VQA are given in Appendix B. + + +**Results and Discussion** Table 1 shows the main results. We first note that hard attention underperforms soft attention, even when its expectation is enumerated. This indicates that Jensen’s inequality +alone is a poor bound. On the other hand, on both experiments, exact marginal likelihood outperforms +soft attention, indicating that when possible it is better to have latent alignments. + + +For NMT, on the IWSLT 2014 German-English task, variational attention with enumeration and +sampling performs comparably to optimizing the log marginal likelihood, despite the fact that it is +optimizing a lower bound. We believe that this is due to the use of _q_ ( _z_ ), which conditions on the +entire source/target and therefore potentially provides better training signal to _p_ ( _z | x,_ ˜ _x_ ) through the +KL term. Note that it is also possible to have _q_ ( _z_ ) come from a pretrained external model, such as +a traditional alignment model [20]. Table 3 (left) shows these results in context compared to the +best reported values for this task. Even with sampling, our system improves on the state-of-the-art. +On the larger WMT 2017 English-German task, the superior performance of variational attention +persists: our baseline soft attention reaches 24.10 BLEU score, while variational attention reaches +24.98. Note that this only reflects a reasonable setting without exhaustive tuning, yet we show that +we can train variational attention at scale. For VQA the trend is largely similar, and results for NLL +with variational attention improve on soft attention and hard attention. However the task-specific +evaluation metrics are slightly worse. + + +Table 2 (left) considers test inference for variational attention, comparing enumeration to _K_ -max with +_K_ = 5. For all methods exact enumeration is better, however _K_ -max is a reasonable approximation. + + +7 VQA eval metric is defined as min _{_ # humans that said answer3 _,_ 1 _}_ . Also note that since there are sometimes + +multiple answers for a given question, in such cases we sample (where the sampling probability is proportional +to the number of humans that said the answer) to get a single label. 
+8Note that enumeration does not imply exact if we are enumerating an expectation on a lower bound. + + +7 + + +NMT VQA +Model Objective E PPL BLEU NLL Eval + + +Soft Attention log _p_ ( _y |_ E[ _z_ ]) - 7.17 32.77 1.76 58.93 +Marginal Likelihood log E[ _p_ ] Enum 6.34 33.29 1.69 60.33 +Hard Attention E _p_ [log _p_ ] Enum 7.37 31.40 1.78 57.60 +Hard Attention E _p_ [log _p_ ] Sample 7.38 31.00 1.82 56.30 +Variational Relaxed Attention E _q_ [log _p_ ] _−_ KL Sample 7.58 30.05 - Variational Attention E _q_ [log _p_ ] _−_ KL Enum 6.08 33.68 1.69 58.44 +Variational Attention E _q_ [log _p_ ] _−_ KL Sample 6.17 33.30 1.75 57.52 + + +Table 1: Evaluation on NMT and VQA for the various models. E column indicates whether the expectation +is calculated via enumeration (Enum) or a single sample (Sample) during training. For NMT we evaluate +intrinsically on perplexity (PPL) (lower is better) and extrinsically on BLEU (higher is better), where for BLEU +we perform beam search with beam size 10 and length penalty (see Appendix B for further details). For VQA +we evaluate intrinsically on negative log-likelihood (NLL) (lower is better) and extrinsically on VQA evaluation + + +PPL BLEU +Model Exact _K_ -Max Exact _K_ -Max + + +Marginal Likelihood 6.34 6.90 33.29 33.31 +Hard + Enum 7.37 7.37 31.40 31.37 +Hard + Sample 7.38 7.38 31.00 31.04 +Variational + Enum 6.08 6.42 33.68 33.69 +Variational + Sample 6.17 6.51 33.30 33.27 + + +Table 2: (Left) Performance change on NMT from exact decoding to _K_ -Max decoding with _K_ = 5. (see section +5 for definition of K-max decoding). (Right) Test perplexity of different approaches while varying _K_ to estimate +E _z_ [ _p_ ( _y|x,_ ˜ _x_ )]. Dotted lines compare soft baseline and variational with full enumeration. + + +Table 2 (right) shows the PPL of different models as we increase _K_ . Good performance requires +_K >_ 1, but we only get marginal benefits for _K >_ 5. Finally, we observe that it is possible to _train_ +with soft attention and _test_ using _K_ -Max with a small performance drop ( `Soft KMax` in Table 2 +(right)). This possibly indicates that soft attention models are approximating latent alignment models. +On the other hand, training with latent alignments and testing with soft attention performed badly. + + +Table 3 (lower right) looks at the entropy of the prior distribution learned by the different models. +Note that hard attention has very low entropy (high certainty) whereas soft attention is quite high. +The variational attention model falls in between. Figure 3 (left) illustrates the difference in practice. + + +Table 3 (upper right) compares inference alternatives for variational attention. RWS reaches a +comparable performance as REINFORCE, but at a higher memory cost as it requires multiple +samples. Gumbel-Softmax reaches nearly the same performance and seems like a viable alternative; +although we found its performance is sensitive to its temperature parameter. We also trained a +non-amortized SVI model, but found that at similar runtime it was not able to produce satisfactory +results, likely due to insufficient updates of the local variational parameters. A hybrid method such as +semi-amortized inference [39, 34] might be a potential future direction worth exploring. + + +Despite extensive experiments, we found that variational relaxed attention performed worse than other +methods. 
In particular we found that when training with a Dirichlet KL, it is hard to reach low-entropy +regions of the simplex, and the attentions are more uniform than either soft or variational categorical +attention. Table 3 (lower right) quantifies this issue. We experimented with other distributions such +as Logistic-Normal and Gumbel-Softmax [31, 47] but neither fixed this issue. Others have also noted +difficulty in training Dirichlet models with amortized inference [69]. + + +Besides performance, an advantage of these models is the ability to perform posterior inference, since +the _q_ function can be used directly to obtain posterior alignments. Contrast this with hard attention +where _q_ = _p_ ( _z | x,_ ˜ _x_ ), i.e. the variational posterior is independent of the future information. Figure 3 +shows the alignments of _p_ and _q_ for variational attention over a fixed sentence (see Appendix C for +more examples). We see that _q_ is able to use future information to correct alignments. We note that +the inability of soft and hard attention to produce good alignments has been noted as a major issue +in NMT [38]. While _q_ is not used directly in left-to-right NMT decoding, it could be employed for +other applications such as in an iterative refinement approach [56, 42]. + + +8 + + +Figure 3: (Left) An example demonstrating the difference between the prior alignment (red) and the variational +posterior (blue) when translating from DE-EN (left-to-right). Note the improved blue alignments for `actually` +and `violent` which benefit from seeing the next word. (Right) Comparison of soft attention (green) with the _p_ +of variational attention (red). Both models imply a similar alignment, but variational attention has lower entropy. + + +Inference Method #Samples PPL BLEU + + + +IWSLT +Model BLEU + + +Beam Search Optimization [77] 26.36 +Actor-Critic [5] 28.53 +Neural PBMT + LM [29] 30.08 +Minimum Risk Training [21] 32.84 + + +Soft Attention 32.77 +Marginal Likelihood 33.29 +Hard Attention + Enum 31.40 +Hard Attention + Sample 30.42 +Variational Relaxed Attention 30.05 +Variational Attention + Enum 33.69 +Variational Attention + Sample 33.30 + + + +REINFORCE 1 6.17 33.30 +RWS 5 6.41 32.96 +Gumbel-Softmax 1 6.51 33.08 + + +Entropy +Model NMT VQA + + +Soft Attention 1.24 2.70 +Marginal Likelihood 0.82 2.66 +Hard Attention + Enum 0.05 0.73 +Hard Attention + Sample 0.07 0.58 +Variational Relaxed Attention 2.02 Variational Attention + Enum 0.54 2.07 +Variational Attention + Sample 0.52 2.44 + + + +Table 3: (Left) Comparison against the best prior work for NMT on the IWSLT 2014 German-English test set. +(Upper Right) Comparison of inference alternatives of variational attention on IWSLT 2014. (Lower Right) +Comparison of different models in terms of implied discrete entropy (lower = more certain alignment). + + +**Potential Limitations** While this technique is a promising alternative to soft attention, there are +some practical limitations: (a) Variational/hard attention needs a good baseline estimator in the form +of soft attention. We found this to be a necessary component for adequately training the system. This +may prevent this technique from working when _T_ is intractably large and soft attention is not an +option. (b) For some applications, the model relies heavily on having a good posterior estimator. In +VQA we had to utilize domain structure for the inference network construction. (c) Recent models +such as the Transformer [74], utilize many repeated attention models. 
For instance the current best +translation models have the equivalent of 150 different attention queries per word translated. It is +unclear if this approach can be used at that scale as predictive inference becomes combinatorial. + + +**6** **Conclusion** + + +Attention methods are ubiquitous tool for areas like natural language processing; however they +are difficult to use as latent variable models. This work explores alternative approaches to latent +alignment, through variational attention with promising result. Future work will experiment with +scaling the method on larger-scale tasks and in more complex models, such as multi-hop attention +models, transformer models, and structured models, as well as utilizing these latent variables for +interpretability and as a way to incorporate prior knowledge. + + +9 + + +**Acknowledgements** + + +We are grateful to Sam Wiseman and Rachit Singh for insightful comments and discussion, as well as +Christian Puhrsch for help with translations. This project was supported by a Facebook Research +Award (Low Resource NMT). YK is supported by a Google AI PhD Fellowship. YD is supported by +a Bloomberg Research Award. AMR gratefully acknowledges the support of NSF CCF-1704834 and +an Amazon AWS Research award. + + +**References** + + +[1] David Alvarez-Melis and Tommi S Jaakkola. A Causal Framework for Explaining the Predictions of +Black-Box Sequence-to-Sequence Models. In _Proceddings of EMNLP_, 2017. + + +[2] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei +Zhang. Bottom-up and Top-Down Attention for Image Captioning and Visual Question Answering. In +_Proceedings of CVPR_, 2018. + + +[3] Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple Object Recognition with Visual Attention. +In _Proceedings of ICLR_, 2015. + + +[4] Jimmy Ba, Ruslan R Salakhutdinov, Roger B Grosse, and Brendan J Frey. Learning Wake-Sleep Recurrent +Attention Models. In _Proceedings of NIPS_, 2015. + + +[5] Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron +Courville, and Yoshua Bengio. An Actor-Critic Algorithm for Sequence Prediction. In _Proceedings of_ +_ICLR_, 2017. + + +[6] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning +to Align and Translate. In _Proceedings of ICLR_, 2015. + + +[7] Hareesh Bahuleyan, Lili Mou, Olga Vechtomova, and Pascal Poupart. Variational Attention for Sequenceto-Sequence Models. _arXiv:1712.08207_, 2017. + + +[8] Ondˇrej Bojar, Christian Buck, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, +Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, and Julia Kreutzer. Proceedings of the second +conference on machine translation. In _Proceedings of the Second Conference on Machine Translation_ . +Association for Computational Linguistics, 2017. + + +[9] Jorg Bornschein, Andriy Mnih, Daniel Zoran, and Danilo J. Rezende. Variational Memory Addressing in +Generative Models. In _Proceedings of NIPS_, 2017. + + +[10] Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. The Mathematics of +Statistical Machine Translation: Parameter Estimation. _Computational linguistics_, 19(2):263–311, 1993. + + +[11] Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. The mathematics +of statistical machine translation: Parameter estimation. _Comput. Linguist._, 19(2):263–311, June 1993. 
+ + +[12] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance Weighted Autoencoders. In _Proceedings_ +_of ICLR_, 2015. + + +[13] Mauro Cettolo, Jan Niehues, Sebastian Stuker, Luisa Bentivogli, and Marcello Federico. Report on the +11th IWSLT evaluation campaign. In _Proceedings of IWSLT_, 2014. + + +[14] William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. Listen, Attend and Spell. _arXiv:1508.01211_, +2015. + + +[15] Kyunghyun Cho, Aaron Courville, and Yoshua Bengio. Describing Multimedia Content using Attentionbased Encoder-Decoder Networks. In _IEEE Transactions on Multimedia_, 2015. + + +[16] Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. AttentionBased Models for Speech Recognition. In _Proceedings of NIPS_, 2015. + + +[17] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron Courville, and Yoshua Bengio. A +Recurrent Latent Variable Model for Sequential Data. In _Proceedings of NIPS_, 2015. + + +[18] Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza +Haffari. Incorporating Structural Alignment Biases into an Attentional Neural Translation Model. In +_Proceedings of NAACL_, 2016. + + +10 + + +[19] Yuntian Deng, Anssi Kanervisto, Jeffrey Ling, and Alexander M Rush. Image-to-Markup Generation with +Coarse-to-Fine Attention. In _Proceedings of ICML_, 2017. + + +[20] Chris Dyer, Victor Chahuneau, and Noah A. Smith. A Simple, Fast, and Effective Reparameterization of +IBM Model 2. In _Proceedings of NAACL_, 2013. + + +[21] Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc’Aurelio Ranzato. Classical Structured +Prediction Losses for Sequence to Sequence Learning. In _Proceedings of NAACL_, 2018. + + +[22] Marco Fraccaro, Soren Kaae Sonderby, Ulrich Paquet, and Ole Winther. Sequential Neural Models with +Stochastic Layers. In _Proceedings of NIPS_, 2016. + + +[23] Anirudh Goyal, Alessandro Sordoni, Marc-Alexandre Cote, Nan Rosemary Ke, and Yoshua Bengio. +Z-Forcing: Training Stochastic Recurrent Networks. In _Proceedings of NIPS_, 2017. + + +[24] Will Grathwohl, Dami Choi, Yuhuai Wu, Geoffrey Roeder, and David Duvenaud. Backpropagation through +the Void: Optimizing control variates for black-box gradient estimation. In _Proceedings of ICLR_, 2018. + + +[25] Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. Incorporating Copying Mechanism in Sequence-toSequence Learning. 2016. + + +[26] Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. Dynamic Neural Turing Machine +with Soft and Hard Addressing Schemes. _arXiv:1607.00036_, 2016. + + +[27] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. +In _Proceedings of CVPR_, 2016. + + +[28] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. _The_ +_Journal of Machine Learning Research_, 14(1):1303–1347, 2013. + + +[29] Po-Sen Huang, Chong Wang, Sitao Huang, Dengyong Zhou, and Li Deng. Towards neural phrase-based +machine translation. In _Proceedings of ICLR_, 2018. + + +[30] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. _arXiv_ +_preprint arXiv:1611.01144_, 2016. + + +[31] Eric Jang, Shixiang Gu, and Ben Poole. Categorical Reparameterization with Gumbel-Softmax. In +_Proceedings of ICLR_, 2017. + + +[32] Martin Jankowiak and Fritz Obermeyer. Pathwise Derivatives Beyond the Reparameterization Trick. In +_Proceedings of ICML_, 2018. 
+ + +[33] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. Structured Attention Networks. In +_Proceedings of ICLR_, 2017. + + +[34] Yoon Kim, Sam Wiseman, Andrew C Miller, David Sontag, and Alexander M Rush. Semi-amortized +variational autoencoders. _arXiv preprint arXiv:1802.02550_, 2018. + + +[35] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In _Proceedings of_ +_ICLR_, 2015. + + +[36] Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In _Proceedings of ICLR_, 2014. + + +[37] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, +Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. Moses: Open source toolkit for statistical +machine translation. In _Proceedings of the 45th annual meeting of the ACL on interactive poster and_ +_demonstration sessions_, pages 177–180. Association for Computational Linguistics, 2007. + + +[38] Philipp Koehn and Rebecca Knowles. Six Challenges for Neural Machine Translation. _arXiv:1706.03872_, +2017. + + +[39] Rahul G. Krishnan, Dawen Liang, and Matthew Hoffman. On the Challenges of Learning with Inference +Networks on Sparse, High-dimensional Data. In _Proceedings of AISTATS_, 2018. + + +[40] Rahul G. Krishnan, Uri Shalit, and David Sontag. Structured Inference Networks for Nonlinear State +Space Models. In _Proceedings of AAAI_, 2017. + + +[41] Dieterich Lawson, Chung-Cheng Chiu, George Tucker, Colin Raffel, Kevin Swersky, and Navdeep Jaitly. +Learning Hard Alignments in Variational Inference. In _Proceedings of ICASSP_, 2018. + + +11 + + +[42] Jason Lee, Elman Mansimov, and Kyunghyun Cho. Deterministic Non-Autoregressive Neural Sequence +Modeling by Iterative Refinement. _arXiv:1802.06901_, 2018. + + +[43] Tao Lei, Regina Barzilay, and Tommi Jaakkola. Rationalizing Neural Rredictions. In _Proceedings of_ +_EMNLP_, 2016. + + +[44] Yang Liu and Mirella Lapata. Learning Structured Text Representations. In _Proceedings of TACL_, 2017. + + +[45] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective Approaches to Attention-based +Neural Machine Translation. In _Proceedings of EMNLP_, 2015. + + +[46] Xuezhe Ma, Yingkai Gao, Zhiting Hu, Yaoliang Yu, Yuntian Deng, and Eduard Hovy. Dropout with +Expectation-linear Regularization. In _Proceedings of ICLR_, 2017. + + +[47] Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete Distribution: A Continuous Relaxation +of Discrete Random Variables. In _Proceedings of ICLR_, 2017. + + +[48] André F. T. Martins and Ramón Fernandez Astudillo. From Softmax to Sparsemax: A Sparse Model of +Attention and Multi-Label Classification. In _Proceedings of ICML_, 2016. + + +[49] Arthur Mensch and Mathieu Blondel. Differentiable Dynamic Programming for Structured Prediction and +Attention. In _Proceedings of ICML_, 2018. + + +[50] Andriy Mnih and Karol Gregor. Neural Variational Inference and Learning in Belief Networks. In +_Proceedings of ICML_, 2014. + + +[51] Andriy Mnih and Danilo J. Rezende. Variational Inference for Monte Carlo Objectives. In _Proceedings of_ +_ICML_, 2016. + + +[52] Andriy Mnih and Danilo J Rezende. Variational inference for monte carlo objectives. _arXiv preprint_ +_arXiv:1602.06725_, 2016. + + +[53] Volodymyr Mnih, Nicola Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent Models of Visual +Attention. In _Proceedings of NIPS_, 2015. + + +[54] Vlad Niculae and Mathieu Blondel. A Regularized Framework for Sparse and Structured Neural Attention. 
+In _Proceedings of NIPS_, 2017. + + +[55] Vlad Niculae, André F. T. Martins, Mathieu Blondel, and Claire Cardie. SparseMAP: Differentiable Sparse +Structured Inference. In _Proceedings of ICML_, 2018. + + +[56] Roman Novak, Michael Auli, and David Grangier. Iterative Refinement for Machine Translation. +_arXiv:1610.06602_, 2016. + + +[57] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global Vectors for Word +Representation. In _Proceedings of EMNLP_, 2014. + + +[58] Colin Raffel, Minh-Thang Luong, Peter J Liu, Ron J Weiss, and Douglas Eck. Online and Linear-Time +Attention by Enforcing Monotonic Alignments. In _Proceedings of ICML_, 2017. + + +[59] Rajesh Ranganath, Sean Gerrish, and David M. Blei. Black Box Variational Inference. In _Proceedings of_ +_AISTATS_, 2014. + + +[60] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards Real-Time Object +Detection with Region Proposal Networks. In _Proceedings of NIPS_, 2015. + + +[61] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In _Proceedings of ICML_, 2014. + + +[62] Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. Reasoning about Entailment with Neural Attention. In _Proceedings of ICLR_, 2016. + + +[63] Alexander M. Rush, Sumit Chopra, and Jason Weston. A Neural Attention Model for Abstractive Sentence +Summarization. In _Proceedings of EMNLP_, 2015. + + +[64] Philip Schulz, Wilker Aziz, and Trevor Cohn. A Stochastic Decoder for Neural Machine Translation. In +_Proceedings of ACL_, 2018. + + +[65] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural Machine Translation of Rare Words with +Subword Units. In _Proceedings of ACL_, 2016. + + +12 + + +[66] Iulian Vlad Serban, Alessandro Sordoni, Laurent Charlin Ryan Lowe, Joelle Pineau, Aaron Courville, and +Yoshua Bengio. A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues. In +_Proceedings of AAAI_, 2017. + + +[67] Shiv Shankar, Siddhant Garg, and Sunita Sarawagi. Surprisingly Easy Hard-Attention for Sequence to +Sequence Learning. In _Proceedings of EMNLP_, 2018. + + +[68] Bonggun Shin, Falgun H Chokshi, Timothy Lee, and Jinho D Choi. Classification of Radiology Reports +Using Neural Attention Models. In _Proceedings of IJCNN_, 2017. + + +[69] Akash Srivastava and Charles Sutton. Autoencoding Variational Inference for Topic Models. In _Proceed-_ +_ings of ICLR_, 2017. + + +[70] Jinsong Su, Shan Wu, Deyi Xiong, Yaojie Lu, Xianpei Han, and Biao Zhang. Variational Recurrent Neural +Machine Translation. In _Proceedings of AAAI_, 2018. + + +[71] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-To-End Memory Networks. In +_Proceedings of NIPS_, 2015. + + +[72] Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. Modeling Coverage for Neural +Machine Translation. In _Proceedings of ACL_, 2016. + + +[73] George Tucker, Andriy Mnih, Chris J. Maddison, Dieterich Lawson, and Jascha Sohl-Dickstein. REBAR: +Low-variance, Unbiased Gradient Estimates for Discrete Latent Variable Models. In _Proceedings of NIPS_, +2017. + + +[74] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz +Kaiser, and Illia Polosukhin. Attention is All You Need. In _Proceedings of NIPS_, 2017. + + +[75] Stephan Vogel, Hermann Ney, and Christoph Tillmann. HMM-based Word Alignment in Statistical +Translation. In _Proceedings of COLING_, 1996. 
+ + +[76] Ronald J. Williams. Simple Statistical Gradient-following Algorithms for Connectionist Reinforcement +Learning. _Machine Learning_, 8, 1992. + + +[77] Sam Wiseman and Alexander M. Rush. Sequence-to-Sequence learning as Beam Search Optimization. In +_Proceedings of EMNLP_, 2016. + + +[78] Shijie Wu, Pamela Shapiro, and Ryan Cotterell. Hard Non-Monotonic Attention for Character-Level +Transduction. In _Proceedings of EMNLP_, 2018. + + +[79] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim +Krikun, Yuan Cao, Klaus Macherey Qin Gao, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, +Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, Nishant Patil +George Kurian, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg +Corrado, Macduff Hughes, and Jeffrey Dean. Google’s Neural Machine Translation System: Bridging the +Gap between Human and Machine Translation. _arXiv:1609.08144_, 2016. + + +[80] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, +and Yoshua Bengio. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In +_Proceedings of ICML_, 2015. + + +[81] Zichao Yang, Kiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked Attention Networks for +Image Question Answering. In _Proceedings of CVPR_, 2016. + + +[82] Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Tomas Kocisky. The Neural Noisy Channel. +In _Proceedings of ICLR_, 2017. + + +[83] Lei Yu, Jan Buys, and Phil Blunsom. Online Segment to Segment Neural Transduction. In _Proceedings of_ +_EMNLP_, 2016. + + +[84] Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, and Min Zhang. Variational Neural Machine Translation. +In _Proceedings of EMNLP_, 2016. + + +[85] Chen Zhu, Yanpeng Zhao, Shuaiyi Huang, Kewei Tu, and Yi Ma. Structured Attentions for Visual Question +Answering. In _Proceedings of ICCV_, 2017. + + +13 + + +## **Supplementary Materials for** **Latent Alignment and Variational Attention** + +**Appendix A: Proof of Proposition 1** + + +**Proposition.** _Define gx,y_ ˆ : ∆ _[T][ −]_ [1] _�→_ [0 _,_ 1] _to be the function given by gx,y_ ˆ( _z_ ) = _f_ ( _x, z_ ) _y_ ˆ _(i.e._ +_gx,y_ ˆ( _z_ ) = _p_ ( _y_ = ˆ _y | x,_ ˜ _x, z_ )) _for a twice differentiable function f_ _. Let Hgx,y_ ˆ( _z_ ) _be the Hessian of_ +_gx,y_ ˆ( _z_ ) _evaluated at z, and further suppose ∥Hgx,y_ ˆ( _z_ ) _∥_ 2 _≤_ _c for all z ∈_ ∆ _[T][ −]_ [1] _,_ ˆ _y ∈Y, and x, where_ +_∥· ∥_ 2 _is the spectral norm. Then for all_ ˆ _y ∈Y,_ + + +_| p_ ( _y_ = ˆ _y | x,_ ˜ _x_ ) _−_ _p_ soft( _y_ = ˆ _y | x,_ ˜ _x_ ) _| ≤_ _c_ + + +_Proof._ We begin by performing Taylor’s expansion of _gx,y_ ˆ at E[ _z_ ]: + + + - E[ _gx,y_ ˆ( _z_ )] = E _gx,y_ ˆ(E[ _z_ ]) + ( _z −_ E[ _z_ ]) _[⊤]_ _∇gx,y_ ˆ(E[ _z_ ]) + [1] + +2 [(] _[z][ −]_ [E][[] _[z]_ [])] _[⊤][H][g][x,][y]_ [ˆ][(ˆ] _[z]_ [)(] _[z][ −]_ [E][[] _[z]_ [])] + += _gx,y_ ˆ(E[ _z_ ]) + [1] + +2 [E][[(] _[z][ −]_ [E][[] _[z]_ [])] _[⊤][H][g][x,][y]_ [ˆ][(ˆ] _[z]_ [)(] _[z][ −]_ [E][[] _[z]_ [])]] + + +for some ˆ _z_ = _λz_ + (1 _−_ _λ_ )E[ _z_ ] _, λ ∈_ [0 _,_ 1]. Then letting _u_ = _z −_ E[ _z_ ], we have + + +_u_ _[⊤]_ _u_ +_|_ ( _z −_ E[ _z_ ]) _[⊤]_ _Hgx,y_ ˆ(ˆ _z_ )( _z −_ E[ _z_ ]) _|_ = _| ∥u∥_ [2] 2 _∥u∥_ 2 _Hgx,y_ ˆ(ˆ _z_ ) _∥u∥_ 2 _|_ + +_≤∥u∥_ [2] 2 _[c]_ + + +where _c_ = max _{|λ_ max _|, |λ_ min _|}_ is the largest absolute eigenvalue of _Hgx,y_ ˆ(ˆ _z_ ). 
(Here _λ_ max and _λ_ min +are maximum/minimum eigenvalues of _HgX,q_ (ˆ _z_ )). Note that _c_ is also equal to the spectral norm +_∥HgX,q_ (ˆ _z_ ) _∥_ 2 since the Hessian is symmetric. + + +Then, + + +_|_ E[( _z −_ E[ _z_ ]) _[⊤]_ _Hgx,y_ ˆ(ˆ _z_ )( _z −_ E[ _z_ ])] _| ≤_ E[ _|_ ( _z −_ E[ _z_ ]) _[⊤]_ _Hgx,y_ ˆ(ˆ _z_ )( _z −_ E[ _z_ ]) _|_ ] + +_≤_ E[ _∥u∥_ [2] 2 _[c]_ []] +_≤_ 2 _c_ + + +Here the first inequality follows due to the convexity of the absolute value function and the last +inequality follows since + + +_∥u∥_ [2] 2 [= (] _[z][ −]_ [E][[] _[z]_ [])] _[⊤]_ [(] _[z][ −]_ [E][[] _[z]_ [])] + += _z_ _[⊤]_ _z_ + E[ _z_ ] _[⊤]_ E[ _z_ ] _−_ 2E[ _z_ ] _[⊤]_ _z_ + +_≤_ _z_ _[⊤]_ _z_ + E[ _z_ ] _[⊤]_ E[ _z_ ] +_≤_ 2 + + +where the last two inequalities are due to the fact that _z,_ E[ _z_ ] _∈_ ∆ _[T][ −]_ [1] . Then putting it all together +we have, + + +_| p_ ( _y_ = ˆ _y | x,_ ˜ _x_ ) _−_ _p_ soft( _y_ = ˆ _y | x,_ ˜ _x_ ) _|_ = _|_ E[ _gx,y_ ˆ( _z_ )] _−_ _gx,y_ ˆ(E[ _z_ ]) _|_ + += [1] + +2 _[|]_ [ E][[(] _[z][ −]_ [E][[] _[z]_ [])] _[⊤][H][g][x,][y]_ [ˆ][(ˆ] _[z]_ [)(] _[z][ −]_ [E][[] _[z]_ [])]] _[ |]_ + +_≤_ _c_ + + +14 + + +**Appendix B: Experimental Setup** + + +**Neural Machine Translation** + + +For data processing we closely follow the setup in [21], which uses Byte Pair Encoding over the +combined source/target training set to obtain a vocabulary size of 14,000 tokens. However, different +from [21] which uses maximum sequence length of 175, for faster training we only train on sequences +of length up to 125. + + +The encoder is a two-layer bi-directional LSTM with 512 units in each direction, and the decoder as +a two-layer LSTM with with 768 units. For the decoder, the convex combination of source hidden +states at each time step from the attention distribution is used as additional input at the next time step. +Word embedding is 512-dimensional. + + +The inference network consists of two bi-directional LSTMs (also two-layer and 512-dimensional +each) which is run over the source/target to obtain the hidden states at each time step. These hidden +states are combined using bilinear attention [45] to produce the variational parameters. (In contrast +the generative model uses MLP attention from [6], though we saw little difference between the two +parameterizations). Only the word embedding is shared between the inference network and the +generative model. + + +Other training details include: batch size of 6, dropout rate of 0.3, parameter initialization over a +uniform distribution _U_ [ _−_ 0 _._ 1 _,_ 0 _._ 1], gradient norm clipping at 5, and training for 30 epochs with Adam +(learning rate = 0.0003, _β_ 1 = 0.9, _β_ 2 = 0.999) [35] with a learning rate decay schedule which starts +halving the learning rate if validation perplexity does not improve. Most models converged well +before 30 epochs. + + +For decoding we use beam search with beam size 10 and length penalty _α_ = 1, from [79]. The length +penalty added about 0.5 BLEU points across all the models. + + +**Visual Question Answering** + + +The model first obtains object features by mean-pooling the pretrained ResNet-101 features [27] +(which are 2048-dimensional) over object regions given by Faster R-CNN [60].The ResNet features +are kept fixed and not fine-tuned during training. We fix the maximum number of possible regions to +be 36. For the question embedding we use a one-layer LSTM with 1024 units over word embeddings. +The word embeddings are 300-dimensional and initialized with GloVe [57]. 
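As a rough sketch of the input pipeline described in this paragraph (our own illustration; the module names, and whether the GloVe-initialized embeddings are further trained, are assumptions rather than details stated in the paper):

```python
import torch
import torch.nn as nn

class VQAInputs(nn.Module):
    """Question encoder plus frozen, precomputed region features.

    Region features are mean-pooled ResNet-101 vectors (2048-d) over
    Faster R-CNN object boxes, capped at 36 regions and not fine-tuned.
    """

    def __init__(self, glove_weights, max_regions=36):
        super().__init__()
        # 300-d word embeddings initialized from GloVe.
        self.embed = nn.Embedding.from_pretrained(glove_weights, freeze=False)
        self.lstm = nn.LSTM(input_size=300, hidden_size=1024, batch_first=True)
        self.max_regions = max_regions

    def forward(self, question_ids, region_feats):
        # question_ids: (batch, q_len) token ids; region_feats: (batch, R, 2048).
        region_feats = region_feats[:, : self.max_regions].detach()
        _, (h, _) = self.lstm(self.embed(question_ids))
        question_emb = h[-1]  # (batch, 1024): the question embedding
        return question_emb, region_feats
```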
The generative model +produces a distribution over the possible objects via applying MLP attention, i.e. + + +_p_ ( _zi_ = 1 _| x,_ ˜ _x_ ) _∝_ exp( _w_ _[⊤]_ tanh( **W** 1 _xi_ + **W** 2 _x_ ˜)) + + +The selected image region is concatenated with the question embedding and fed to a one-layer MLP +with ReLU non-linearity and 1024 hidden units. + + +The inference network produces a categorical distribution over the image regions by interacting +the answer embedding _hy_ (which are 256-dimensional and initialized randomly) with the question +embedding ˜ _x_ and the image regions _xi_, + + +_q_ ( _zi_ = 1) _∝_ exp( _u_ _[⊤]_ tanh( **U** 1( _xi ⊙_ ReLU( **V** 1 _hy_ )) + **U** 2(˜ _x ⊙_ ReLU( **V** 2 _hy_ )))) + + +where _⊙_ denotes element-wise multiplication. The generative/inference attention MLPs have 1024 +hidden units each (i.e. _w, u ∈_ R [1024] ). + + +Other training details include: batch size of 512, dropout rate of 0.5 on the penultimate layer (i.e. +before affine transformation into answer vocabulary), and training for 50 epochs with with Adam +(learning rate = 0.0005, _β_ 1 = 0.9, _β_ 2 = 0.999) [35]. + + +In cases where there is more than one answer for a given question/image pair, we randomly sample +the answer, where the sampling probability is proportional to the number of humans who gave the +answer. + + +15 + + +**Appendix C: Additional Visualizations** + + +(a) (b) + + +(c) (d) + + +(e) (f) + + +Figure 4: (Left Column) Further examples highlighting the difference between the prior alignment (red) and +the variational posterior (blue) when translating from DE-EN (left-to-right). The variational posterior is able to +better handle reordering; in (a) the variational posterior successfully aligns ‘turning’ to ‘verwandelt’, in (c) we +see a similar pattern with the alignment of the clause ‘that’s my brand’ to ‘das ist meine marke’. In (e) the prior +and posterior both are confused by the ‘-ial’ in ‘territor-ial’, however the posterior still remains more accurate +overall and correctly aligns the rest of ‘revierverhalten’ to ‘territorial behaviour’. (Right Column) Additional +comparisons between soft attention (green) and the prior alignments of variational attention (red). Alignments +from both models are similar, but variational attention is lower entropy. Both soft and variational attention rely +on aligning the inserted English word ‘orientation’ to the comma in (b) since a direct translation does not appear +in the German source. + + +16 + + diff --git a/alignment-papers-text/2002.03518_Multilingual_Alignment_of_Contextual_Word_Represen.md b/alignment-papers-text/2002.03518_Multilingual_Alignment_of_Contextual_Word_Represen.md new file mode 100644 index 0000000000000000000000000000000000000000..210b4d0842439ce05a241869a8ad390a3ab25ea6 --- /dev/null +++ b/alignment-papers-text/2002.03518_Multilingual_Alignment_of_Contextual_Word_Represen.md @@ -0,0 +1,1132 @@ +Published as a conference paper at ICLR 2020 + +## MULTILINGUAL ALIGNMENT OF CONTEXTUAL WORD REPRESENTATIONS + + +**Steven Cao, Nikita Kitaev & Dan Klein** +Computer Science Division +University of California, Berkeley +_{_ stevencao,kitaev,klein _}_ @berkeley.edu + + +ABSTRACT + + +We propose procedures for evaluating and strengthening contextual embedding +alignment and show that they are useful in analyzing and improving multilingual +BERT. 
In particular, after our proposed alignment procedure, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model, +remarkably matching pseudo-fully-supervised translate-train models for Bulgarian and Greek. Further, to measure the degree of alignment, we introduce a contextual version of word retrieval and show that it correlates well with downstream +zero-shot transfer. Using this word retrieval task, we also analyze BERT and +find that it exhibits systematic deficiencies, e.g. worse alignment for open-class +parts-of-speech and word pairs written in different scripts, that are corrected by +the alignment procedure. These results support contextual alignment as a useful +concept for understanding large multilingual pre-trained models. + + +1 INTRODUCTION + + +Figure 1: t-SNE (Maaten & Hinton, 2008) visualization of the embedding space of multilingual +BERT for English-German word pairs (left: pre-alignment, right: post-alignment). Each point is a +different instance of the word in the Europarl corpus. This figure suggests that BERT begins already +somewhat aligned out-of-the-box but becomes much more aligned after our proposed procedure. + + +Embedding alignment was originally studied for word vectors with the goal of enabling cross-lingual +transfer, where the embeddings for two languages are in alignment if word translations, e.g. _cat_ and +_Katze_, have similar representations (Mikolov et al., 2013a; Smith et al., 2017). Recently, large pretrained models have largely subsumed word vectors based on their accuracy on downstream tasks, +partly due to the fact that their word representations are context-dependent, allowing them to more +richly capture the meaning of a word (Peters et al., 2018; Howard & Ruder, 2018; Radford et al., +2018; Devlin et al., 2018). Therefore, with the same goal of cross-lingual transfer but for these more +complex models, we might consider contextual embedding alignment, where we observe whether +word pairs within parallel sentences, e.g. _cat_ in _“The cat sits”_ and _Katze_ in _“Die Katze sitzt,”_ have +similar representations. + + +1 + + +Published as a conference paper at ICLR 2020 + + +One model relevant to these questions is multilingual BERT, a version of BERT pre-trained on 104 +languages that achieves remarkable transfer on downstream tasks. For example, after the model is +fine-tuned on the English MultiNLI training set, it achieves 74.3% accuracy on the test set in Spanish, which is only 7.1% lower than the English accuracy (Devlin et al., 2018; Conneau et al., 2018b). +Furthermore, while the model transfers better to languages similar to English, it still achieves reasonable accuracies even on languages with different scripts. + + +However, given the way that multilingual BERT was pre-trained, it is unclear why we should expect +such high zero-shot performance. Compared to monolingual BERT which exhibits no zero-shot +transfer, multilingual BERT differs only in that (1) during pre-training (i.e. masked word prediction), +each batch contains sentences from all of the languages, and (2) it uses a single shared vocabulary, +formed by WordPiece on the concatenated monolingual corpora (Devlin et al., 2019). Therefore, +we might wonder: (1) How can we better understand BERT’s multilingualism? (2) Can we further +improve BERT’s cross-lingual transfer? + + +In this paper, we show that contextual embedding alignment is a useful concept for addressing +these questions. 
First, we propose a contextual version of word retrieval to evaluate the degree +of alignment, where a model is presented with two parallel corpora, and given a word within a +sentence in one corpus, it must find the correct word and sentence in the other. Using this metric +of alignment, we show that multilingual BERT achieves zero-shot transfer because its embeddings +are partially aligned, as depicted in Figure 1, with the degree of alignment predicting the degree of +downstream transfer. + + +Next, using between 10K and 250K sentences per language from the Europarl corpus as parallel +data (Koehn, 2005), we propose a fine-tuning-based alignment procedure and show that it significantly improves BERT as a multilingual model. Specifically, on zero-shot XNLI, where the model +is trained on English MultiNLI and tested on other languages (Conneau et al., 2018b), the aligned +model improves accuracies by 2.78% on average over the base model, and it remarkably matches +translate-train models for Bulgarian and Greek, which approximate the fully-supervised setting. + + +To put our results in the context of past work, we also use word retrieval to compare our finetuning procedure to two alternatives: (1) fastText augmented with sentence and aligned using rotations (Bojanowski et al., 2017; R¨uckl´e et al., 2018; Artetxe et al., 2018), and (2) BERT aligned using +rotations (Aldarmaki & Diab, 2019; Schuster et al., 2019; Wang et al., 2019). We find that when +there are multiple occurences per word, fine-tuned BERT outperforms fastText, which outperforms +rotation-aligned BERT. This result supports the intuition that contextual alignment is more difficult +than its non-contextual counterpart, given that a rotation, at least when applied naively, is no longer +sufficient to produce strong alignments. In addition, when there is only one occurrence per word, +fine-tuned BERT matches the performance of fastText. Given that context disambiguation is no +longer necessary, this result suggests that our fine-tuning procedure is able to align BERT at the type +level to a degree that matches non-contextual approaches. + + +Finally, we use the contextual word retrieval task to conduct finer-grained analysis of multilingual +BERT, with the goal of better understanding its strengths and shortcomings. Specifically, we find +that base BERT has trouble aligning open-class compared to closed-class parts-of-speech, as well +as word pairs that have large differences in usage frequency, suggesting insight into the pre-training +procedure that we explore in Section 5. Together, these experiments support contextual alignment +as an important task that provides useful insight into large multilingual pre-trained models. + + +2 RELATED WORK + + +**Word vector alignment.** There has been a long line of works that learn aligned word vectors +from varying levels of supervision (Ruder et al., 2019). One popular family of methods starts with +word vectors learned independently for each language (using a method like skip-gram with negative +sampling (Mikolov et al., 2013b)), and it learns a mapping from source language vectors to target +language vectors with a bilingual dictionary as supervision (Mikolov et al., 2013a; Smith et al., +2017; Artetxe et al., 2017). When the mapping is constrained to be an orthogonal linear transformation, the optimal mapping that minimizes distances between word pairs can be solved in closed +form (Artetxe et al., 2016; Schonemann, 1966). 
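As a concrete illustration of that closed-form solution, the sketch below (ours, not code from the cited papers) computes the orthogonal Procrustes map with an SVD; `X` and `Y` are assumed to hold the source- and target-language vectors of a bilingual dictionary, one pair per row, and `load_dictionary_vectors` is a hypothetical helper.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal map W minimizing sum_i ||W x_i - y_i||^2 (Schonemann, 1966).

    X: (n, d) array of source-language vectors for n dictionary pairs.
    Y: (n, d) array of the corresponding target-language vectors.
    Returns a (d, d) orthogonal matrix W; X @ W.T lives in the target space.
    """
    # The minimizer is W = U V^T, where U S V^T is the SVD of Y^T X.
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt

# Hypothetical usage:
# X, Y = load_dictionary_vectors("en", "de")
# W = procrustes_align(X, Y)
# mapped = X @ W.T   # source vectors rotated into the target space
```

Because W is orthogonal, it preserves distances among the source vectors; Section 3.3 below drops this constraint and instead fine-tunes the encoder directly.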
Alignment is evaluated using bilingual lexicon induction, so these papers also propose ways to mitigate the hubness problem in nearest neighbors, + + +2 + + +Published as a conference paper at ICLR 2020 + + +e.g. by using alternate similarity functions like CSLS (Conneau et al., 2018a). A recent set of works +has also shown that the mapping can be learned with minimal to no supervision by starting with +some minimal seed dictionary and alternating between learning the linear map and inducing the dictionary (Artetxe et al., 2018; Conneau et al., 2018a; Hoshen & Wolf, 2018; Xu et al., 2018; Chen & +Cardie, 2018). + + +**Incorporating context into alignment.** One key challenge in making alignment context aware is +that the embeddings are now different across multiple occurrences of the same word. Past papers +have handled this issue by removing context and aligning the “average sense” of a word. In one +such study, Schuster et al. (2019) learn a rotation to align contextual ELMo embeddings (Peters +et al., 2018) with the goal of improving zero-shot multilingual dependency parsing, and they handle +context by taking the average embedding for a word in all of its contexts. In another paper, Aldarmaki & Diab (2019) learn a rotation on sentence vectors, produced by taking the average word +vector over the sentence, and they show that the resulting alignment also works well for word-level +tasks. In a contemporaneous work, Wang et al. (2019) align not only the word but also the context +by learning a linear transformation using word-aligned parallel data to align multilingual BERT, +with the goal of improving zero-shot dependency parsing numbers. In this paper, we similarly align +not only the word but also the context, and we also depart from these past works by using more +expressive alignment methods than rotation. + + +**Incorporating parallel texts into pre-training.** Instead of performing alignment post-hoc, another line of works proposes contextual pre-training procedures that are more cross-lingually-aware. +Wieting et al. (2019) pre-train sentence embeddings using parallel texts by maximizing similarity between sentence pairs while minimizing similarity with negative examples. Lample & Conneau (2019) propose a cross-lingual pre-training objective that incorporates parallel data in addition to monolingual corpora, leading to improved downstream cross-lingual transfer. In contrast, +our method uses less parallel data and aligns existing pre-trained models rather than requiring pretraining from scratch. + + +**Analyzing multilingual BERT.** Pires et al. (2019) present a series of probing experiments to better +understand multilingual BERT, and they find that transfer is possible even between dissimilar languages, but that it works better between languages that are typologically similar. They conclude that +BERT is remarkably multilingual but falls short for certain language pairs. + + +3 METHODS + + +3.1 MULTILINGUAL PRE-TRAINING + + +We first briefly describe multilingual BERT (Devlin et al., 2018). Like monolingual BERT, multilingual BERT is pre-trained on sentences from Wikipedia to perform two tasks: masked word +prediction, where it must predict words that are masked within a sentence, and next sentence prediction, where it must predict whether the second sentence follows the first one. 
The model is trained +on 104 languages, with each batch containing training sentences from each language, and it uses a +shared vocabulary formed by WordPiece on the 104 Wikipedias concatenated (Wu et al., 2016). + + +3.2 DEFINING AND EVALUATING CONTEXTUAL ALIGNMENT + + +In the following sections, we describe how to define, evaluate, and improve contextual alignment. Given two languages, a model is in _contextual alignment_ if it has similar representations +for word pairs within parallel sentences. More precisely, suppose we have _N_ parallel sentences +_C_ = _{_ ( **s** [1] _,_ **t** [1] ) _, ...,_ ( **s** _[N]_ _,_ **t** _[N]_ ) _}_, where ( **s** _,_ **t** ) is a source-target sentence pair. Also, let each sentence +pair ( **s** _,_ **t** ) have word pairs, denoted _a_ ( **s** _,_ **t** ) = _{_ ( _i_ 1 _, j_ 1) _, ...,_ ( _im, jm_ ) _}_, containing position tuples +( _i, j_ ) such that the words **s** _i_ and **t** _j_ are translations of each other. [1] We will use _f_ to represent a +pre-trained model such that _f_ ( _i,_ **s** ) is the contextual embedding for the _i_ th word in **s** . + + +1These pairs are called word alignments in the machine translation community, but we use the term “word +pairs” to avoid confusion with embedding alignment. Also, because BERT operates on subwords while the +corpus is aligned at the word level, we keep only the BERT vector for the last subword of each word. + + +3 + + +Published as a conference paper at ICLR 2020 + + +As an example, we might have the following sentence pair: + + +0 1 2 3 4 0 1 2 3 4 5 +**s** = _{I_ _ate_ _the_ _apple_ _.}_ **t** = _{Ich_ _habe_ _den_ _Apfel_ _gegessen_ _.}_ +_a_ ( **s** _,_ **t** ) = _{_ (0 _,_ 0) _,_ (1 _,_ 4) _,_ (2 _,_ 2) _,_ (3 _,_ 3) _,_ (4 _,_ 5) _}_ + + +Then, using the parallel corpus _C_, we can measure the contextual alignment of the model _f_ using its +accuracy in _contextual word retrieval_ . In this task, the model is presented with two parallel corpora, +and given a word within a sentence in one corpus, it must find the correct word and sentence in the +other. Specifically, we can define a nearest neighbor retrieval function + + +neighbor( _i,_ **s** ; _f, C_ ) = argmax sim( _f_ ( _i,_ **s** ) _, f_ ( _j,_ **t** )) _,_ +**t** _∈C,_ 0 _≤j≤_ len( **t** ) + + +where _i_ and _j_ denote the position within a sentence and sim is a similarity function. The accuracy +is then given by the percentage of exact matches over the entire corpus, or + + + +_A_ ( _f_ ; _C_ ) = [1] + +_N_ + + + + + + +( **s** _,_ **t** ) _∈C_ + + + + + +I(neighbor( _i,_ **s** ; _f, C_ ) = ( _j,_ **t** )) _,_ + +( _i,j_ ) _∈a_ ( **s** _,_ **t** ) + + + +where I represents the indicator function. We can perform the same procedure in the other direction, +where we pull target words given source words, so we report the average between the two directions. +As our similarity function, we use CSLS, a modified version of cosine similarity that mitigates +the hubness problem, with neighborhood size 10 (Conneau et al., 2018a). One additional point is +that this procedure can be made more or less contextual based on the corpus: a corpus with more +occurrences for each word type requires better representations of context. Therefore, we also test +non-contextual word retrieval by removing all but the first occurrence of each word type. + + +Given parallel data, these word pairs can be procured in an unsupervised fashion using standard +techniques developed by the machine translation community (Brown et al., 1993). 
While these +methods can be noisy, by running the algorithm in both the source-target and target-source directions +and only keeping word pairs in their intersection, we can trade-off coverage for accuracy, producing +a reasonably high-precision dataset (Och & Ney, 2003). + + +3.3 ALIGNING PRE-TRAINED CONTEXTUAL EMBEDDINGS + + +To improve the alignment of the model _f_ with respect to the corpus _C_, we can encapsulate alignment +in the loss function + + + +_L_ ( _f_ ; _C_ ) = _−_ + +( **s** _,_ **t** ) _∈C_ + + + + + - sim( _f_ ( _i,_ **s** ) _, f_ ( _j,_ **t** )) _,_ + + +( _i,j_ ) _∈a_ ( **s** _,_ **t** ) + + + +where we sum the similarities between word pairs. Because the CSLS metric is not easily optimized, +we instead use the squared error loss, or sim( _f_ ( _i,_ **s** ) _, f_ ( _j,_ **t** )) = _−||f_ ( _i,_ **s** ) _−_ _f_ ( _j,_ **t** ) _||_ [2] 2 [.] + + +However, note that this loss function does not account for the informativity of _f_ ; for example, it is +zero if _f_ is constant. Therefore, at a high level, we would like to minimize _L_ ( _f_ ; _C_ ) while maintaining some aspect of _f_ that makes it useful, e.g. its high accuracy when fine-tuned on downstream +tasks. Letting _f_ 0 denote the initial pre-trained model before alignment, we achieve this goal by +defining a regularization term + + + +_R_ ( _f_ ; _C_ ) = + +**t** _∈C_ + + + +len( **t** ) + +- _||f_ ( _j,_ **t** ) _−_ _f_ 0( _j,_ **t** ) _||_ [2] 2 _[,]_ + + +_i_ =1 + + + +which imposes a penalty if the target language embeddings stray from their initialization. Then, +we sample minibatches _B ⊂_ _C_ and take gradient steps of the function _L_ ( _f_ ; _B_ ) + _λR_ ( _f_ ; _B_ ) directly on the weights of _f_, which moves the source embeddings toward the target embeddings while +preventing the latter from drifting too far. In our experiments, we set _λ_ = 1. + + +In the multilingual case, suppose we have _k_ parallel corpora _C_ [1] _, ..., C_ _[k]_, where each corpus has a +different source language with the target language as English. Then, we sample equal-sized batches +_B_ _[i]_ _⊂_ _C_ _[i]_ from each corpus and take gradient steps on [�] _i_ _[k]_ =1 _[L]_ [(] _[f]_ [;] _[ B][i]_ [) +] _[ λR]_ [(] _[f]_ [;] _[ B][i]_ [)][, which moves] +all of the non-English embeddings toward English. + + +4 + + +Published as a conference paper at ICLR 2020 + + +Note that this alignment method departs from prior work, in which each non-English language is +rotated to match the English embedding space through individual learned matrices. Specifically, the +most widely used post-hoc alignment method learns a rotation _W_ applied to the source vectors to +minimize the distance between parallel word pairs, or + + + + + - _||Wf_ ( _i,_ **s** ) _−_ _f_ ( _j,_ **t** ) _||_ [2] 2 _s.t._ _W_ _[⊤]_ _W_ = _I._ (1) + + +( _i,j_ ) _∈a_ ( **s** _,_ **t** ) + + + +min +_W_ + + + + + + +( **s** _,_ **t** ) _∈C_ + + + +This problem is known as the Procrustes problem and can be solved in closed form (Schonemann, +1966). This approach has the nice property that the vectors are only rotated, preserving distances +and therefore the semantic information captured by the embeddings (Artetxe et al., 2016). However, +rotation requires the strong assumption that the embedding spaces are roughly isometric (Søgaard +et al., 2018), an assumption that may not hold for contextual pre-trained models because they represent more aspects of a word than just its type, i.e. context and syntax, which are less likely to +be isomorphic between languages. 
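To make the fine-tuning alternative of this section concrete, here is a minimal PyTorch-style sketch (our illustration, not the authors' released code) of one alignment step on a minibatch: the squared-error loss over aligned word pairs plus the regularizer that keeps target-side embeddings near the frozen initial model, with λ = 1 as in the text. The `encode` wrapper returning one vector per token is an assumption.

```python
import torch

def alignment_step(encode, encode_init, batch, optimizer, lam=1.0):
    """One gradient step on L(f; B) + lam * R(f; B) for a minibatch B.

    encode(sent)      -> (len(sent), d) contextual embeddings of the current model.
    encode_init(sent) -> embeddings of the frozen pre-alignment model f_0.
    batch: list of (src_sent, tgt_sent, pairs), where pairs holds (i, j)
           positions of translated word pairs (e.g. the fastAlign intersection).
    """
    loss = torch.zeros(())
    for src, tgt, pairs in batch:
        f_src, f_tgt = encode(src), encode(tgt)
        with torch.no_grad():
            f0_tgt = encode_init(tgt)
        # L: pull each source word embedding toward its aligned target word.
        for i, j in pairs:
            loss = loss + (f_src[i] - f_tgt[j]).pow(2).sum()
        # R: penalize target embeddings that drift from their initial values.
        loss = loss + lam * (f_tgt - f0_tgt).pow(2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the multilingual case, one such batch is drawn per language pair and the losses are summed, with English always on the target side.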
Given that past work has also found independent alignment per +language pair to be inferior to joint training (Heyman et al., 2019), another advantage of our method +is that the alignment for all languages is done simultaneously. + + +As our dataset, we use the Europarl corpora for English paired with Bulgarian, German, Greek, +Spanish, and French, the languages represented in both Europarl and XNLI (Koehn, 2005). After +tokenization (Koehn et al., 2007), we produce word pairs using fastAlign and keep the one-to-one +pairs in the intersection (Dyer et al., 2013). We use the most recent 1024 sentences as the test set, the +previous 1024 sentences as the development set, and the following 250K sentences as the training +set. Furthermore, we modify the test set accuracy calculation to only include word pairs not seen in +the training set. We also remove any exact matches, e.g. punctuation and numbers, because BERT is +already aligned for these pairs due to its shared vocabulary. Given that parallel data may be limited +for low-resource language pairs, we also report numbers for 10K and 50K parallel sentences. + + +3.4 SENTENCE-AUGMENTED NON-CONTEXTUAL BASELINE + + +Given that there has been a long line of work on word vector alignment (Artetxe et al., 2018; Conneau et al., 2018a; Smith et al., 2017, _inter alia_ ), we also compare BERT to a sentence-augmented +fastText baseline (Bojanowski et al., 2017). Following Artetxe et al. (2018), we first normalize, then +mean-center, then normalize the word vectors, and we then learn a rotation with the same parallel +data as in the contextual case, as described in Equation 1. We also strengthen this baseline by including sentence information: specifically, during word retrieval, we concatenate each word vector +with a vector representing its sentence. Following R¨uckl´e et al. (2018), we compute the sentence +vector by concatenating the average, maximum, and minimum vector over all of the words in the +sentence, a method that was shown to be state-of-the-art for a suite of cross-lingual tasks. We also +experimented with other methods, such as first retrieving the sentence and then the word, but found +this method resulted in the highest accuracy. As a result, the fastText vectors are 1200-dimensional, +while the BERT vectors are 768-dimensional. + + +3.5 TESTING ZERO-SHOT TRANSFER + + +The next step is to determine whether better alignment improves cross-lingual transfer. As our +downstream task, we use the XNLI dataset, where the English MultiNLI development and test sets +are human-translated into multiple languages (Conneau et al., 2018b; Williams et al., 2018). Given +a pair of sentences, the task is to predict whether the first sentence implies the second, where there +are three labels: entailment, neutral, or contradiction. Starting from either the base or aligned multilingual BERT, we train on English and evaluate on Bulgarian, German, Greek, Spanish, and French, +the XNLI languages represented in Europarl. + + +As our architecture, following Devlin et al. (2018), we apply a linear layer followed by softmax +on the [CLS] embedding of the sentence pair, producing scores for each of the three labels. The +model is trained using cross-entropy loss and selected based on its development set accuracy averaged across all of the languages. 
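A minimal sketch of that classification head (our own illustration; the `hidden_size` default and module names are assumptions, not taken from a released implementation):

```python
import torch
import torch.nn as nn

class XNLIHead(nn.Module):
    """Linear layer + softmax over the [CLS] embedding of the sentence pair."""

    def __init__(self, hidden_size=768, num_labels=3):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, cls_vec):
        # cls_vec: (batch, hidden_size) [CLS] vector from BERT for the pair.
        # Labels: entailment, neutral, contradiction.
        return torch.log_softmax(self.classifier(cls_vec), dim=-1)

# Trained with cross-entropy (e.g. nn.NLLLoss on these log-probabilities)
# on English MultiNLI only; evaluation on the other languages is zero-shot.
```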
As a fully-supervised ceiling, we also compare to models trained +and tested on the same language, where for the non-English training data, we use the machine translations of the English MultiNLI training data provided by Conneau et al. (2018b). While the quality +of the training data is affected by the quality of the MT system, this comparison nevertheless serves +as a good approximation for the fully-supervised setting. + + +5 + + +Published as a conference paper at ICLR 2020 + + +English Bulgarian German Greek Spanish French Average + + +Translate-Train + + +Base BERT 81.9 73.6 75.9 71.6 77.8 76.8 76.3 + + +Zero-Shot _[a]_ + + +Base BERT 80.4 68.7 70.4 67.0 74.5 73.4 72.4 +Sentence-aligned BERT (rotation) **81.1** 68.9 71.2 66.7 74.9 73.5 72.7 +Word-aligned BERT (rotation) 78.8 69.0 71.3 67.1 74.3 73.0 72.2 +Word-aligned BERT (fine-tuned) 80.1 **73.4** **73.1** **71.4** **75.5** **74.5** **74.7** + + +XLM (MLM + TLM) 85.0 77.4 77.8 76.6 78.9 78.7 79.1 + + +Table 1: Accuracy on the XNLI test set, where we compare to base BERT (Devlin et al., 2018) +and two rotation-based methods, sentence alignment (Aldarmaki & Diab, 2019) and word alignment (Wang et al., 2019). We also include the current state-of-the-art zero-shot achieved by +XLM (Lample & Conneau, 2019). Rotation-based methods provide small gains on some languages +but not others. On the other hand, after fine-tuning-based alignment, Bulgarian and Greek match the +translate-train ceiling, while German, Spanish, and French close roughly one-third of the gap. + + +_a_ Note that the zero-shot Base BERT numbers are slightly different from those reported in Devlin et al. +(2019) because we select a single model using the average accuracy across the six languages. This selection +method also accounts for the varying English accuracies across the zero-shot methods. + + +Sentences English Bulgarian German Greek Spanish French Average + + +None 80.4 68.7 70.4 67.0 74.5 73.4 72.4 +10K 79.2 71.0 71.8 67.5 75.3 74.1 73.2 +50K **81.1** 73 72.6 69.6 75 **74.5** 74.3 +250K 80.1 **73.4** **73.1** **71.4** **75.5** **74.5** **74.7** + + +Table 2: Zero-shot accuracy on the XNLI test set, where we align BERT with varying amounts of +parallel data. The method scales with the amount of data but achieves a large fraction of the gains +with 50K sentences per language pair. + + +4 RESULTS + + +4.1 ZERO-SHOT XNLI TRANSFER + + +First, we test whether alignment improves multilingual BERT by applying the models to zero-shot +XNLI, as displayed in Table 1. We see that our alignment procedure greatly improves accuracies, +with all languages seeing a gain of at least 1%. In particular, the Bulgarian and Greek zero-shot +numbers are boosted by almost 5% each and match the translate-train numbers, suggesting that the +alignment procedure is especially effective for languages that are initially difficult for BERT. We +also run alignment for more distant language pairs (Chinese, Arabic, Urdu) and find similar results, +which we report in the appendix. + + +Comparing to rotation-based methods (Aldarmaki & Diab, 2019; Wang et al., 2019), we find that a +rotation produces small gains for some languages, namely Bulgarian, German, and Spanish, but is +sub-optimal overall, providing evidence that the increased expressivity of our proposed procedure is +beneficial for contextual alignment. We explore this comparison more in Section 5.1. 
+ + +4.2 ALIGNMENT WITH LESS DATA + + +Given that our goal is zero-shot transfer, we cannot expect to always have large amounts of parallel data. Therefore, we also characterize the performance of our alignment method with varying +amounts of data, as displayed in Table 2. We find that it improves transfer with as little as 10K +sentences per language, making it a promising approach for low-resource languages. + + +6 + + +Published as a conference paper at ICLR 2020 + + +bg-en de-en el-en es-en fr-en Average + + +Contextual + + +Aligned fastText + sentence 44.0 46.4 42.0 48.6 44.5 45.1 +Base BERT 19.5 26.1 13.9 32.5 28.3 24.1 +Word-aligned BERT (rotation) 29.8 31.6 20.8 36.8 31.0 30.0 +Word-aligned BERT (fine-tuned) **50.7** **51.3** **49.8** **51.0** **48.6** **50.3** + + +Non-Contextual + + +Aligned fastText + sentence 61.3 **65.4** 61.6 **71.1** 64.8 64.8 +Base BERT 29.1 37.0 22.3 46.5 41.8 35.3 +Word-aligned BERT (rotation) 39.6 43.6 32.4 51.4 46.1 42.6 +Word-aligned BERT (fine-tuned) **62.8** 64.3 **67.5** 68.4 **66.3** **65.9** + + +Table 3: Word retrieval accuracy for the aligned sentence-augmented fastText baseline and BERT +pre- and post-alignment. Across languages, base BERT has variable accuracy while fine-tuningaligned BERT is consistently effective. Fine-tuned BERT also matches fastText in a version of the +task where context is not necessary, suggesting that our method matches the type-level alignment of +fastText while also aligning context. + + +5 ANALYSIS + + +5.1 WORD RETRIEVAL + + +In the following sections, we present word retrieval results to both compare our method to past work +and better understand the strengths and weaknesses of multilingual BERT. Table 3 displays the word +retrieval accuracies for the aligned sentence-augmented fastText baseline and BERT pre- and postalignment. First, we find that in contextual retrieval, fine-tuned BERT outperforms fastText, which +outperforms rotation-aligned BERT. This result supports the intuition that aligning large pre-trained +models is more difficult than aligning word vectors, given that a rotation, at least when applied +naively, produces sub-par alignments. In addition, fine-tuned BERT matches the performance of +fastText in non-contextual retrieval, suggesting that our alignment procedure overcomes these challenges and achieves type-level alignment that matches non-contextual approaches. In the appendix, +we also provide examples of aligned BERT disambiguating between different meanings of a word, +giving qualitative evidence of the benefit of context alignment. + + +We also find that before alignment, BERT’s performance varies greatly between languages, while +after alignment it is consistently effective. In particular, Bulgarian and Greek initially have very +low accuracies. This phenomenon is also reflected in the XNLI numbers (Table 1), where Bulgarian +and Greek receive the largest boosts from alignment. Examining the connection between alignment +and zero-shot more closely, we find that the word retrieval accuracies are highly correlated with +downstream zero-shot performance (Figure 2), supporting our evaluation measure as predictive of +cross-lingual transfer. + + +The language discrepancies are also consistent with a hypothesis by Pires et al. (2019) to explain +BERT’s multilingualism. They posit that due to the shared vocabulary, shared words between languages, e.g. numbers and names, are forced to have the same representation. 
Then, due to the +masked word prediction task, other words that co-occur with these shared words also receive similar +representations. If this hypothesis is true, then languages with higher lexical overlap with English are +likely to experience higher transfer. As an extreme form of this phenomenon, Bulgarian and Greek +have completely different scripts and should experience worse transfer than the common-script languages, an intuition that is confirmed by the word retrieval and XNLI accuracies. The fact that all +languages are equally aligned with English post-alignment suggests that the pre-training procedure +is suboptimal for these languages. + + +7 + + +Published as a conference paper at ICLR 2020 + + +Lexical Overlap Numeral Punctuation Proper Noun Average + + +Base BERT 0.90 0.88 0.80 0.86 +Aligned BERT 0.97 0.96 0.95 0.96 + + +Closed-Class Determiner Preposition Conjunction Pronoun Auxiliary Average + + +Base BERT 0.76 0.72 0.71 0.70 0.61 0.70 +Aligned BERT 0.91 0.86 0.89 0.89 0.84 0.88 + + +Open-Class Noun Adverb Adjective Verb Average + + +Base BERT 0.61 0.57 0.50 0.49 0.54 +Aligned BERT 0.90 0.88 0.90 0.89 0.89 + + +Table 4: Accuracy by part-of-speech tag for non-contextual word retrieval. To achieve better +word type coverage, we do not remove word pairs seen in the training set. The tags are grouped into +lexically overlapping, closed-class, and open-class groups. The “Particle,” “Symbol,” “Interjection,” +and “Other” tags are omitted. + + + + + + + +74 + + +72 + + +70 + + +68 + + + + + + + +66 +15 20 25 30 + + +Contextual word retrieval accuracy + + +Figure 2: XNLI zero-shot versus word retrieval accuracy for base BERT, where each +point is a language paired with English. +This plot suggests that alignment correlates +well with cross-lingual transfer. + + +|1.00
Figure 3: Contextual word retrieval accuracy plotted against difference in frequency rank between source and target. The accuracy of base BERT plummets for larger differences, suggesting that its alignment depends on word pairs having similar usage statistics. (The plotted curves, omitted in this extraction, cover Base BERT and Aligned BERT, with accuracy between 0.60 and 1.00 over rank differences from 10^0 to 10^4.)

5.2 WORD RETRIEVAL PART-OF-SPEECH ANALYSIS

Next, to gain insight into the multilingual pre-training procedure, we analyze the accuracy broken down by part-of-speech using the Universal Part-of-Speech Tagset (Petrov et al., 2012), annotated using polyglot (Al-Rfou et al., 2013) for Bulgarian and spaCy (Honnibal & Montani, 2017) for the other languages, as displayed in Table 4. Unsurprisingly, multilingual BERT has high alignment out-of-the-box for groups with high lexical overlap, e.g. numerals, punctuation, and proper nouns, due to its shared vocabulary. We further divide the remaining tags into closed-class and open-class, where closed-class parts-of-speech correspond to fixed sets of words serving grammatical functions (e.g. determiner, preposition, conjunction, pronoun, and auxiliary), while open-class parts-of-speech correspond to lexical words (e.g. noun, adverb, adjective, verb). Interestingly, we see that base BERT has consistently higher accuracy for closed-class than for open-class categories (0.70 vs 0.54), but this discrepancy disappears after alignment (0.88 vs 0.89).

5.3 USAGE HYPOTHESIS FOR ALIGNMENT

From this closed-class vs open-class difference, we hypothesize that BERT’s alignment of a particular word pair is influenced by the similarity of their usage statistics. Specifically, given that BERT is trained through masked word prediction, its embeddings are in large part determined by the co-occurrences between words. Therefore, two words that are used in similar contexts should be better aligned. This hypothesis provides an explanation of the closed-class vs open-class difference: closed-class words are typically grammatical, so they are used in similar ways across typologically similar languages. Furthermore, these words cannot be substituted for one another due to their grammatical function. Therefore, their usage statistics are a strong signature that can be used for alignment. On the other hand, open-class words can be substituted for one another: for example, in most sentences, the noun tokens could be replaced by a wide range of semantically dissimilar nouns with the sentence remaining syntactically well-formed. By this effect, many nouns have similar co-occurrences, making them difficult to align through masked word prediction alone.

To further test this hypothesis, we plot the word retrieval accuracy versus the difference between the frequency rank of the target and source word, where this difference measures discrepancies in usage, as depicted in Figure 3. We see that accuracy drops off significantly as the source-target difference increases, supporting our hypothesis. Furthermore, this shortcoming is remedied by alignment, revealing another systematic deficiency of multilingual pre-training.
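The bucketing behind Figure 3 needs only corpus frequency ranks on each side plus the per-pair retrieval outcome. The sketch below groups accuracy by order of magnitude of the rank gap; the function name, input layout, and log-scale bucketing are assumptions for illustration, not the authors' code.

```python
import math
from collections import defaultdict

def accuracy_by_rank_gap(results, src_rank, tgt_rank):
    """Bucket retrieval accuracy by source/target frequency-rank difference.

    results:  iterable of (src_word, tgt_word, correct) triples, where
              `correct` is True if the gold target was retrieved at rank 1
    src_rank: dict mapping a source word to its corpus frequency rank
    tgt_rank: dict mapping a target word to its corpus frequency rank
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for src, tgt, correct in results:
        gap = abs(src_rank[src] - tgt_rank[tgt])
        bucket = int(math.log10(gap + 1))  # order-of-magnitude bucket of the gap
        totals[bucket] += 1
        hits[bucket] += int(correct)
    return {b: hits[b] / totals[b] for b in sorted(totals)}
```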
6 CONCLUSION

Given that the degree of alignment is causally predictive of downstream cross-lingual transfer, contextual alignment proves to be a useful concept for understanding and improving multilingual pretrained models. With only small amounts of parallel data, our alignment procedure improves multilingual BERT and corrects many of its systematic deficiencies. Contextual word retrieval also provides useful new insights into the pre-training procedure, opening up new avenues for analysis.

REFERENCES

Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. Polyglot: Distributed word representations for multilingual NLP. In _Proceedings of the Seventeenth Conference on Computational Natural Language Learning_, pp. 183–192, Sofia, Bulgaria, August 2013. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W13-3520.

Hanan Aldarmaki and Mona Diab. Context-aware cross-lingual mapping. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pp. 3906–3911, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1391. URL https://www.aclweb.org/anthology/N19-1391.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_, pp. 2289–2294, Austin, Texas, November 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1250. URL https://www.aclweb.org/anthology/D16-1250.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. Learning bilingual word embeddings with (almost) no bilingual data. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pp. 451–462, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1042. URL https://www.aclweb.org/anthology/P17-1042.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pp. 789–798, Melbourne, Australia, July 2018. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P18-1073.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. _Transactions of the Association for Computational Linguistics_, 5:135–146, 2017. doi: 10.1162/tacl_a_00051. URL https://www.aclweb.org/anthology/Q17-1010.

Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. The mathematics of statistical machine translation: Parameter estimation. _Comput. Linguist._, 19(2):263–311, June 1993. ISSN 0891-2017. URL http://dl.acm.org/citation.cfm?id=972470.972474.

Xilun Chen and Claire Cardie. Unsupervised multilingual word embeddings. In _Proceedings of the_
_2018 Conference on Empirical Methods in Natural Language Processing_, pp. 
261–270, Brussels, +Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/ +[v1/D18-1024. URL https://www.aclweb.org/anthology/D18-1024.](https://www.aclweb.org/anthology/D18-1024) + + +Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herve J’egou. +Word translation without parallel data. In _Proceedings of the 6th International Conference on_ +_Learning Representations (ICLR 2018)_ [, 2018a. URL https://arxiv.org/pdf/1710.](https://arxiv.org/pdf/1710.04087.pdf) +[04087.pdf.](https://arxiv.org/pdf/1710.04087.pdf) + + +Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger +Schwenk, and Veselin Stoyanov. XNLI: Evaluating cross-lingual sentence representations. In +_Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_, pp. +2475–2485, Brussels, Belgium, October-November 2018b. Association for Computational Lin[guistics. doi: 10.18653/v1/D18-1269. URL https://www.aclweb.org/anthology/](https://www.aclweb.org/anthology/D18-1269) +[D18-1269.](https://www.aclweb.org/anthology/D18-1269) + + +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep +bidirectional transformers for language understanding. _arXiv:1810.04805 [cs.CL]_, October 2018. +[URL http://arxiv.org/abs/1810.04805.](http://arxiv.org/abs/1810.04805) + + +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training +of deep bidirectional transformers for language understanding. [https://github.com/](https://github.com/google-research/bert/blob/master/multilingual.md) +[google-research/bert/blob/master/multilingual.md, 2019.](https://github.com/google-research/bert/blob/master/multilingual.md) + + +Chris Dyer, Victor Chahuneau, and Noah A. Smith. A simple, fast, and effective reparameterization +of IBM model 2. In _Proceedings of the 2013 Conference of the North American Chapter of_ +_the Association for Computational Linguistics: Human Language Technologies_, pp. 644–648, +[Atlanta, Georgia, June 2013. Association for Computational Linguistics. URL https://www.](https://www.aclweb.org/anthology/N13-1073) +[aclweb.org/anthology/N13-1073.](https://www.aclweb.org/anthology/N13-1073) + + +Andreas Eisele and Yu Chen. MultiUN: A multilingual corpus from united nation documents. +In _Proceedings of the Seventh International Conference on Language Resources and Eval-_ +_uation (LREC’10)_, Valletta, Malta, May 2010. European Language Resources Association +[(ELRA). URL http://www.lrec-conf.org/proceedings/lrec2010/pdf/686_](http://www.lrec-conf.org/proceedings/lrec2010/pdf/686_Paper.pdf) +[Paper.pdf.](http://www.lrec-conf.org/proceedings/lrec2010/pdf/686_Paper.pdf) + + +Geert Heyman, Bregt Verreet, Ivan Vuli´c, and Marie-Francine Moens. Learning unsupervised multilingual word embeddings with incremental multilingual hubs. In _Proceedings of the 2019 Con-_ +_ference of the North American Chapter of the Association for Computational Linguistics: Human_ +_Language Technologies, Volume 1 (Long and Short Papers)_, pp. 1890–1902, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1188. URL +[https://www.aclweb.org/anthology/N19-1188.](https://www.aclweb.org/anthology/N19-1188) + + +Matthew Honnibal and Ines Montani. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear, 2017. + + +Yedid Hoshen and Lior Wolf. 
Non-adversarial unsupervised word translation. In _Proceedings of the_ +_2018 Conference on Empirical Methods in Natural Language Processing_, pp. 469–478, Brussels, +Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/ +[v1/D18-1043. URL https://www.aclweb.org/anthology/D18-1043.](https://www.aclweb.org/anthology/D18-1043) + + +Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. +In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics_ +_(Volume 1: Long Papers)_, pp. 328–339. Association for Computational Linguistics, 2018. URL +[http://aclweb.org/anthology/P18-1031.](http://aclweb.org/anthology/P18-1031) + + +10 + + +Published as a conference paper at ICLR 2020 + + +Philipp Koehn. Europarl: A parallel corpus for statistical machine translation. In _Conference Pro-_ +_ceedings: The Tenth Machine Translation Summit_, pp. 79–86, Phuket, Thailand, 2005. AAMT. + + +Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola +Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. Moses: Open source toolkit for statistical machine translation. In _Proceedings of the 45th Annual Meeting of the Association for Com-_ +_putational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions_, pp. +177–180, Prague, Czech Republic, June 2007. Association for Computational Linguistics. URL +[https://www.aclweb.org/anthology/P07-2045.](https://www.aclweb.org/anthology/P07-2045) + + +Guillame Lample and Alexis Conneau. Cross-lingual language model pretraining. 2019. URL +[https://arxiv.org/pdf/1901.07291.pdf.](https://arxiv.org/pdf/1901.07291.pdf) + + +Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. _Journal of Ma-_ +_chine Learning Research_ [, 9:2579–2605, 2008. URL http://www.jmlr.org/papers/v9/](http://www.jmlr.org/papers/v9/vandermaaten08a.html) +[vandermaaten08a.html.](http://www.jmlr.org/papers/v9/vandermaaten08a.html) + + +Tomas Mikolov, Quoc V Le, and Ilya Sutskever. Exploiting similarities among languages for ma[chine translation. 2013a. URL https://arxiv.org/pdf/1309.4168.pdf.](https://arxiv.org/pdf/1309.4168.pdf) + + +Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In _Proceedings of the 26th International_ +_Conference on Neural Information Processing Systems - Volume 2_, NIPS’13, pp. 3111–3119, +USA, 2013b. Curran Associates Inc. [URL http://dl.acm.org/citation.cfm?id=](http://dl.acm.org/citation.cfm?id=2999792.2999959) +[2999792.2999959.](http://dl.acm.org/citation.cfm?id=2999792.2999959) + + +Franz Josef Och and Hermann Ney. A systematic comparison of various statistical alignment +models. _Comput. Linguist._, 29(1):19–51, March 2003. ISSN 0891-2017. doi: 10.1162/ +[089120103321337421. URL http://dx.doi.org/10.1162/089120103321337421.](http://dx.doi.org/10.1162/089120103321337421) + + +Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and +Luke Zettlemoyer. Deep contextualized word representations. In _Proceedings of the 2018 Con-_ +_ference of the North American Chapter of the Association for Computational Linguistics: Hu-_ +_man Language Technologies, Volume 1 (Long Papers)_, pp. 2227–2237, New Orleans, Louisiana, +June 2018. Association for Computational Linguistics. 
doi: 10.18653/v1/N18-1202. URL +[https://www.aclweb.org/anthology/N18-1202.](https://www.aclweb.org/anthology/N18-1202) + + +Slav Petrov, Dipanjan Das, and Ryan McDonald. A universal part-of-speech tagset. In _Proceed-_ +_ings of the Eighth International Conference on Language Resources and Evaluation (LREC-_ +_2012)_, pp. 2089–2096, Istanbul, Turkey, May 2012. European Languages Resources Association +[(ELRA). URL http://www.lrec-conf.org/proceedings/lrec2012/pdf/274_](http://www.lrec-conf.org/proceedings/lrec2012/pdf/274_Paper.pdf) +[Paper.pdf.](http://www.lrec-conf.org/proceedings/lrec2012/pdf/274_Paper.pdf) + + +Telmo Pires, Eva Schlinger, and Dan Garrette. How multilingual is multilingual BERT? In +_Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, +pp. 4996–5001, Florence, Italy, July 2019. Association for Computational Linguistics. URL +[https://www.aclweb.org/anthology/P19-1493.](https://www.aclweb.org/anthology/P19-1493) + + +Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018. URL [https:](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) +[//s3-us-west-2.amazonaws.com/openai-assets/research-covers/](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) +[language-unsupervised/language_understanding_paper.pdf.](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) + + +Andreas R¨uckl´e, Steffen Eger, Maxime Peyrard, and Iryna Gurevych. Concatenated p-mean word +embeddings as universal cross-lingual sentence representations. _arXiv:1803.01400 [cs.CL]_, 2018. +[URL http://arxiv.org/abs/1803.01400.](http://arxiv.org/abs/1803.01400) + + +Sebastian Ruder, Ivan Vuli´c, and Anders Søgaard. A survey of cross-lingual word embedding models. _J. Artif. Int. Res._, 65(1):569–630, May 2019. ISSN 1076-9757. doi: 10.1613/jair.1.11640. +[URL https://doi.org/10.1613/jair.1.11640.](https://doi.org/10.1613/jair.1.11640) + + +11 + + +Published as a conference paper at ICLR 2020 + + +Peter H. Schonemann. A generalized solution of the orthogonal procrustes problem. _Psychometrika_, +31(1):1–10, 1966. + + +Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. Cross-lingual alignment of contextual +word embeddings, with applications to zero-shot dependency parsing. In _Proceedings of the 2019_ +_Conference of the North American Chapter of the Association for Computational Linguistics:_ +_Human Language Technologies, Volume 1 (Long and Short Papers)_, pp. 1599–1613, Minneapolis, +Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1162. +[URL https://www.aclweb.org/anthology/N19-1162.](https://www.aclweb.org/anthology/N19-1162) + + +Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. Offline bilingual +word vectors, orthogonal transformations and the inverted softmax. In _Proceedings of the 5th_ +_International Conference on Learning Representations (ICLR 2017)_ [, 2017. URL https://](https://openreview.net/pdf?id=r1Aab85gg) +[openreview.net/pdf?id=r1Aab85gg.](https://openreview.net/pdf?id=r1Aab85gg) + + +Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. On the limitations of unsupervised bilingual dictionary induction. 
In _Proceedings of the 56th Annual Meeting of the Association for_ +_Computational Linguistics (Volume 1: Long Papers)_, pp. 778–788, Melbourne, Australia, July +[2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1072. URL https:](https://www.aclweb.org/anthology/P18-1072) +[//www.aclweb.org/anthology/P18-1072.](https://www.aclweb.org/anthology/P18-1072) + + +J¨org Tiedemann. Parallel data, tools and interfaces in OPUS. In _Proceedings of the Eighth In-_ +_ternational Conference on Language Resources and Evaluation (LREC’12)_, pp. 2214–2218, Is[tanbul, Turkey, May 2012. European Language Resources Association (ELRA). URL http:](http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf) +[//www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf.](http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf) + + +Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, and Ting Liu. Cross-lingual BERT transformation for zero-shot dependency parsing. In _Proceedings of the 2019 Conference on Em-_ +_pirical Methods in Natural Language Processing and the 9th International Joint Conference on_ +_Natural Language Processing (EMNLP-IJCNLP)_, pp. 5725–5731, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1575. URL +[https://www.aclweb.org/anthology/D19-1575.](https://www.aclweb.org/anthology/D19-1575) + + +John Wieting, Kevin Gimpel, Graham Neubig, and Taylor Berg-Kirkpatrick. Simple and effective paraphrastic similarity from parallel translations. In _Proceedings of the 57th Annual_ +_Meeting of the Association for Computational Linguistics_, pp. 4602–4608, Florence, Italy, July +[2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1453. URL https:](https://www.aclweb.org/anthology/P19-1453) +[//www.aclweb.org/anthology/P19-1453.](https://www.aclweb.org/anthology/P19-1453) + + +Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In _Proceedings of the 2018 Conference of the North_ +_American Chapter of the Association for Computational Linguistics: Human Language Technolo-_ +_gies, Volume 1 (Long Papers)_, pp. 1112–1122, New Orleans, Louisiana, June 2018. Association +[for Computational Linguistics. doi: 10.18653/v1/N18-1101. URL https://www.aclweb.](https://www.aclweb.org/anthology/N18-1101) +[org/anthology/N18-1101.](https://www.aclweb.org/anthology/N18-1101) + + +Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, +Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, +Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, +Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google’s +neural machine translation system: Bridging the gap between human and machine translation. +_arXiv:1609.08144 [cs.CL]_, 2016. + + +Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. Unsupervised cross-lingual transfer of +word embedding spaces. In _Proceedings of the 2018 Conference on Empirical Methods in Natural_ +_Language Processing_, pp. 2465–2474, Brussels, Belgium, October-November 2018. Association +[for Computational Linguistics. doi: 10.18653/v1/D18-1268. 
URL https://www.aclweb.](https://www.aclweb.org/anthology/D18-1268)
[org/anthology/D18-1268.](https://www.aclweb.org/anthology/D18-1268)

| | English | Bulgarian | German | Greek | Spanish | French | Arabic | Chinese | Urdu | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| Translate-Train: Base BERT | 81.9 | 73.6 | 75.9 | 71.6 | 77.8 | 76.8 | 70.7 | 76.6 | 61.6 | 74.1 |
| Zero-Shot: Base BERT | 80.4 | 68.7 | 70.4 | 67.0 | 74.5 | 73.4 | 65.6 | 70.6 | 60.3 | 70.1 |
| Zero-Shot: Aligned BERT (20K sent) | **80.8** | **71.6** | **72.5** | **68.1** | **74.7** | **73.6** | **66.3** | **71.5** | **61.1** | **71.1** |

Table 5: Zero-shot accuracy on the XNLI test set with more languages, where we use 20K parallel sentences for each language paired with English. This result confirms that the alignment method works for distant languages and a variety of parallel corpora, including Europarl, MultiUN, and Tanzil, which contains sentences from the Quran (Koehn, 2005; Eisele & Chen, 2010; Tiedemann, 2012).

A APPENDIX

A.1 OPTIMIZATION HYPERPARAMETERS

For both alignment and XNLI optimization, we use a learning rate of 5 × 10^-5 with Adam hyperparameters _β_ = (0.9, 0.98), _ϵ_ = 10^-9, and linear learning rate warmup for the first 10% of the training data. For alignment, the model is trained for one epoch, with each batch containing 2 sentence pairs per language. For XNLI, each model is trained for 3 epochs with 32 examples per batch, and 10% dropout is applied to the BERT embeddings.

A.2 ALIGNMENT OF CHINESE, ARABIC, AND URDU

In Table 5, we report numbers for additional languages, where we align a single BERT model for all eight languages and then fine-tune on XNLI. We use 20K sentences per language, where we use the MultiUN corpus for Arabic and Chinese (Eisele & Chen, 2010), the Tanzil corpus for Urdu (Tiedemann, 2012), and the Europarl corpus for the other five languages (Koehn, 2005). This result confirms that the alignment method works for a variety of languages and corpora. Furthermore, the Tanzil corpus consists of sentences from the Quran, suggesting that the method works even when the parallel corpus and downstream task contain sentences from entirely different domains.

A.3 EXAMPLES OF CONTEXT-AWARE RETRIEVAL

In this section, we qualitatively show that aligned BERT is able to disambiguate between different occurrences of a word.

First, we find two meanings of the word “like” occurring in the English-German Europarl test set. Note also that in the second and third examples, the two senses of “like” occur in the same sentence.

_•_ This empire did not look for colonies far from home or overseas, **like** most Western European States, but close by.
Dieses Reich suchte seine Kolonien nicht weit von zu Hause und in Übersee **wie** die meisten westeuropäischen Staaten, sondern in der unmittelbaren Umgebung.

_•_ **Like** other speakers, I would like to support the call for the arms embargo to remain.
**Wie** andere Sprecher, so möchte auch ich den Aufruf zur Aufrechterhaltung des Waffenembargos unterstützen.

_•_ Like other speakers, I would **like** to support the call for the arms embargo to remain.
Wie andere Sprecher, so **möchte** auch ich den Aufruf zur Aufrechterhaltung des Waffenembargos unterstützen.

_•_ I would also **like**, although they are absent, to mention the Commission and the Council.
Ich **möchte** mir sogar erlauben, die Kommission und den Rat zu nennen, auch wenn sie nicht anwesend sind.
+ + +13 + + +Published as a conference paper at ICLR 2020 + + +Multiple meanings of “order”: + + +_•_ Moreover, the national political elite had to make a detour in Ambon in **order** to reach the +civil governor’s residence by warship. +In Ambon mußte die politische Spitze des Landes auch noch einen Umweg machen, **um** +mit einem Kriegsschiff die Residenz des Provinzgouverneurs zu erreichen. + + +_•_ Although the European Union has an interest in being surrounded by large, stable regions, +the tools it has available in **order** to achieve this are still very limited. + +Der Europ¨aischen Union ist zwar an großen stabilen Regionen in ihrer Umgebung gelegen, +aber sie verfgt nach wie vor nur ber recht begrenzte Instrumente, **um** das zu erreichen. + + +_•_ We could reasonably expect the new Indonesian government to take action in three fundamental areas: restoring public **order**, prosecuting and punishing those who have blood on +their hands and entering into a political dialogue with the opposition. + +Von der neuen indonesischen Regierung darf man mit Fug und Recht drei elementare Maßnahmen erwarten: die Wiederherstellung der ¨offentlichen **Ordnung**, die Verfolgung und +Bestrafung derjenigen, an deren H¨anden Blut klebt, und die Aufnahme des politischen Dialogs mit den Gegnern. + + +_•_ Firstly, I might mention the fact that the army needs to be reformed, secondly that a stable +system of law and **order** needs to be introduced. + +Ich nenne hier an erster Stelle die notwendige Reform der Armee, ferner die Einfhrung +eines stabilen Systems rechtsstaatlicher **Ordnung** . + + +Multiple meanings of “support”: + + +_•_ Financial **support** is needed to enable poor countries to take part in these court activities. +Arme L¨ander m¨ussen finanziell **unterst¨utzt** werden, damit auch sie sich an der Arbeit des +Gerichtshofs beteiligen k¨onnen. + + +_•_ We must help them and ensure that a proper action plan is implemented to **support** their +work. + +Es gilt einen wirklichen Aktionsplan auf den Weg zu bringen, um die Arbeit dieser Organisationen zu **unterst¨utzen** . + + +_•_ So I hope that you will all **support** this resolution condemning the abominable conditions +of prisoners and civilians in Djibouti. +Ich hoffe daher, daß Sie alle diese Entschließung **bef¨urworten**, die die entsetzlichen Bedingungen von Inhaftierten und Zivilpersonen in Dschibuti verurteilt. + + +_•_ It would be difficult to **support** a subsidy scheme that channelled most of the aid to the +large farms in the best agricultural regions. +Es w¨are auch problematisch, ein Beihilfesystem zu **bef¨urworten**, das die meisten Beihilfen +in die großen Betriebe in den besten landwirtschaftlichen Gebieten lenkt. + + +Multiple meanings of “close”: + + +_•_ This empire did not look for colonies far from home or overseas, like most Western European States, but **close** by. + +Dieses Reich suchte seine Kolonien nicht weit von zu Hause und in bersee wie die meisten +westeurop¨aischen Staaten, sondern in der unmittelbaren **Umgebung** . + + +_•_ In addition, if we are to shut down or refuse investment from every company which may +have an association with the arms industry, then we would have to **close** virtually every +American and Japanese software company on the island of Ireland with catastrophic consequences. 
+ +Wenn wir zudem jedes Unternehmen, das auf irgendeine Weise mit der Rstungsindustrie +verbunden ist, schließen oder Investitionen dieser Unternehmen unterbinden, dann mßten +wir so ziemlich alle amerikanischen und japanischen Softwareunternehmen auf der irischen +Insel **schließen**, was katastrophale Auswirkungen h¨atte. + + +14 + + +Published as a conference paper at ICLR 2020 + + +_•_ On the other hand, the deployment of resources left over in the Structural Funds from the +programme planning period 1994 to 1999 is hardly worth considering as the available funds +have already been allocated to specific measures, in this case in **close** collaboration with +the relevant French authorities. +Die Verwendung verbliebener Mittel der Strukturfonds aus dem Programmplanungszeitraum 1994 bis 1999 ist dagegen kaum in Erw¨agung zu ziehen, da die verfgbaren +Mittel bereits bestimmten Maßnahmen zugewiesen sind, und zwar im konkreten Fall im +**engen** Zusammenwirken mit den zust¨andigen franz¨osischen Beh¨orden. + + +_•_ This is particularly justified given that, as already stated, many Member States have very +**close** relations with Djibouti. +Zumal, wie erw¨ahnt, viele Mitgliedstaaten sehr **enge** Beziehungen zu Dschibuti unterhalten. + + +_•_ Mr President, it is regrettable that, at the **close** of the 20th century, a century symbolised so +positively by the peaceful women’s revolution, there are still countries, such as Kuwait and +Afghanistan, where half the population, women that is, is still denied fundamental human +rights. +Herr Pr¨asident! Es ist wirklich bedauerlich, daß es am **Ende** des 20. Jahrhunderts, eines +so positiv von der friedlichen Revolution der Frauen gepr¨agten Jahrhunderts, noch immer +L¨ander wie Kuwait und Afghanistan gibt, in denen der H¨alfte der Bev¨olkerung, den Frauen, +die elementaren Menschenrechte verweigert werden. + + +15 + + diff --git a/alignment-papers-text/2010.11784_Self-Alignment_Pretraining_for_Biomedical_Entity_R.md b/alignment-papers-text/2010.11784_Self-Alignment_Pretraining_for_Biomedical_Entity_R.md new file mode 100644 index 0000000000000000000000000000000000000000..0504ca760b06032d27b55e37ece22605d4d4b4a4 --- /dev/null +++ b/alignment-papers-text/2010.11784_Self-Alignment_Pretraining_for_Biomedical_Entity_R.md @@ -0,0 +1,1116 @@ +## **Self-Alignment Pretraining for Biomedical Entity Representations** + +**Fangyu Liu** _[♣]_ **, Ehsan Shareghi** _[♦][,][♣]_ **, Zaiqiao Meng** _[♣]_ **, Marco Basaldella** _[♥][∗]_ **, Nigel Collier** _[♣]_ + +_♣_ Language Technology Lab, TAL, University of Cambridge +_♦_ Department of Data Science & AI, Monash University _♥_ Amazon Alexa +_♣_ {fl399, zm324, nhc30}@cam.ac.uk +_♦_ ehsan.shareghi@monash.edu _♥_ mbbasald@amazon.co.uk + + + +**Abstract** + + +main, we achieve SOTA even without taskspecific supervision. With substantial improvement over various domain-specific pretrained +MLMs such as BIOBERT, SCIBERT and PUBMEDBERT, our pretraining scheme proves to +be both effective and robust. [1] + + +**1** **Introduction** + + +Biomedical entity [2] representation is the foundation for a plethora of text mining systems in the +medical domain, facilitating applications such as +literature search (Lee et al., 2016), clinical decision +making (Roberts et al., 2015) and relational knowledge discovery (e.g. chemical-disease, drug-drug +and protein-protein relations, Wang et al. 2018). +The heterogeneous naming of biomedical concepts + + +_∗_ Work conducted prior to joining Amazon. 
+[1For code and pretrained models, please visit: https:](https://github.com/cambridgeltl/sapbert) +[//github.com/cambridgeltl/sapbert.](https://github.com/cambridgeltl/sapbert) +2In this work, _biomedical entity_ refers to the surface forms +of biomedical concepts, which can be a single word (e.g. +_fever_ ), a compound (e.g. _sars-cov-2_ ) or a short phrase (e.g. +_abnormal retinal vascular development_ ). + + + +poses a major challenge to representation learning. +For instance, the medication _Hydroxychloroquine_ +is often referred to as _Oxichlorochine_ (alternative +name), _HCQ_ (in social media) and _Plaquenil_ (brand +name). +MEL addresses this problem by framing it as +a task of mapping entity mentions to unified concepts in a medical knowledge graph. [3] The main +bottleneck of MEL is the quality of the entity representations (Basaldella et al., 2020). Prior works +in this domain have adopted very sophisticated +text pre-processing heuristics (D’Souza and Ng, +2015; Kim et al., 2019; Ji et al., 2020; Sung et al., +2020) which can hardly cover all the variations +of biomedical names. In parallel, self-supervised +learning has shown tremendous success in NLP via +leveraging the masked language modelling (MLM) + + +3Note that we consider only the biomedical entities themselves and not their contexts, also known as medical concept +normalisation/disambiguation in the BioNLP community. + + +TRON, Shin et al. 2020) have made much progress +in biomedical text mining tasks. Nonetheless, representing medical entities with the existing SOTA +pretrained MLMs (e.g. PUBMEDBERT, Gu et al. +2020) as suggested in Fig. 1 (left) does not lead to +a well-separated representation space. + + +To address the aforementioned issue, we propose +to pretrain a Transformer-based language model on +the biomedical knowledge graph of UMLS (Bodenreider, 2004), the largest interlingua of biomedical +ontologies. UMLS contains a comprehensive collection of biomedical synonyms in various forms +(UMLS 2020AA has 4M+ concepts and 10M+ synonyms which stem from over 150 controlled vocabularies including MeSH, SNOMED CT, RxNorm, +Gene Ontology and OMIM). [4] We design a selfalignment objective that clusters synonyms of the +same concept. To cope with the immense size of +UMLS, we sample hard training pairs from the +knowledge base and use a scalable metric learning +loss. We name our model as **S** elf- **a** ligning **p** retrained **BERT** (SAPBERT). + + +Being both simple and powerful, SAPBERT obtains new SOTA performances across all six MEL +benchmark datasets. In contrast with the current +systems which adopt complex pipelines and hybrid +components (Xu et al., 2020; Ji et al., 2020; Sung +et al., 2020), SAPBERT applies a much simpler +training procedure without requiring any pre- or +post-processing steps. At test time, a simple nearest +neighbour’s search is sufficient for making a prediction. When compared with other domain-specific +pretrained language models (e.g. BIOBERT and +SCIBERT), SAPBERT also brings substantial improvement by up to 20% on accuracy across all +tasks. The effectiveness of the pretraining in SAPBERT is especially highlighted in the scientific language domain where SAPBERT outperforms previous SOTA even without fine-tuning on any MEL +datasets. We also provide insights on pretraining’s +impact across domains and explore pretraining with +fewer model parameters by using a recently introduced ADAPTER module in our training scheme. 
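As an illustration of the nearest-neighbour prediction described above, the sketch below encodes a mention and a small list of ontology names with a BERT-style encoder and links the mention to the most similar name by cosine similarity over [CLS] vectors. The checkpoint identifier and the three ontology names are placeholders, and this is a simplified sketch rather than the released SAPBERT code.

```python
import torch
from transformers import AutoTokenizer, AutoModel

checkpoint = "bert-base-uncased"  # placeholder; any BERT-style encoder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
encoder = AutoModel.from_pretrained(checkpoint).eval()

def embed(names):
    """Encode strings and return L2-normalised [CLS] vectors."""
    batch = tokenizer(names, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        cls = encoder(**batch).last_hidden_state[:, 0]  # [CLS] token output
    return torch.nn.functional.normalize(cls, dim=-1)

# In practice this would be the full (name, CUI) list from the ontology.
ontology_names = ["hydroxychloroquine", "fever", "sars-cov-2"]
ontology_vecs = embed(ontology_names)

mention_vec = embed(["plaquenil"])
predicted = ontology_names[(mention_vec @ ontology_vecs.T).argmax().item()]
```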
+ + +4 +[https://www.nlm.nih.gov/research/umls/knowledge_](https://www.nlm.nih.gov/research/umls/knowledge_sources/metathesaurus/release/statistics.html) + +[sources/metathesaurus/release/statistics.html](https://www.nlm.nih.gov/research/umls/knowledge_sources/metathesaurus/release/statistics.html) + + + +Figure 2: The distribution of similarity scores for +all sampled PUBMEDBERT representations in a minibatch. The left graph shows the distribution of **+** and **-** +pairs which are easy and already well-separated. The +right graph illustrates larger overlap between the two +groups generated by the online mining step, making +them harder and more informative for learning. + + +**2** **Method: Self-Alignment Pretraining** + + +We design a metric learning framework that learns +to self-align synonymous biomedical entities. The +framework can be used as both pretraining on +UMLS, and fine-tuning on task-specific datasets. +We use an existing BERT model as our starting +point. In the following, we introduce the key components of our framework. + + +**Formal Definition.** Let ( _x, y_ ) _∈X × Y_ denote a tuple of a name and its categorical label. +For the self-alignment pretraining step, _X × Y_ +is the set of all (name, CUI [5] ) pairs in UMLS, +e.g. ( _Remdesivir_, C4726677); while for the finetuning step, it is formed as an entity mention +and its corresponding mapping from the ontology, e.g. ( _scratchy throat_, 102618009). Given +any pair of tuples ( _xi, yi_ ) _,_ ( _xj, yj_ ) _∈X × Y_, the +goal of the self-alignment is to learn a function +_f_ ( _·_ ; _θ_ ) : _X →_ R _[d]_ parameterised by _θ_ . Then, the +similarity _⟨f_ ( _xi_ ) _, f_ ( _xj_ ) _⟩_ (in this work we use cosine similarity) can be used to estimate the resemblance of _xi_ and _xj_ (i.e., high if _xi, xj_ are synonyms and low otherwise). We model _f_ by a BERT +model with its output [CLS] token regarded as the +representation of the input. [6] During the learning, +a sampling procedure selects the informative pairs +of training samples and uses them in the pairwise +metric learning loss function (introduced shortly). + + +**Online Hard Pairs Mining.** We use an online +hard triplet mining condition to find the most + + +5In UMLS, CUI is the **C** oncept **U** nique **I** dentifier. +6We tried multiple strategies including first-token, meanpooling, [CLS] and also NOSPEC (recommended by Vuli´c +et al. 2020) but found no consistent best strategy (optimal +strategy varies on different *BERTs). + + + +2 + + +informative training examples (i.e. hard positive/negative pairs) within a mini-batch for efficient +training, Fig. 2. For biomedical entities, this step +can be particularly useful as most examples can +be easily classified while a small set of very hard +ones cause the most challenge to representation +learning. [7] We start from constructing all possible +triplets for all names within the mini-batch where +each triplet is in the form of ( _xa, xp, xn_ ). Here +_xa_ is called _anchor_, an arbitrary name in the minibatch; _xp_ a positive match of _xa_ (i.e. _ya_ = _yp_ ) and +_xn_ a negative match of _xa_ (i.e. _ya ̸_ = _yn_ ). Among +the constructed triplets, we select out all triplets +that violate the following condition: + + +_∥f_ ( _xa_ ) _−_ _f_ ( _xp_ ) _∥_ 2 _< ∥f_ ( _xa_ ) _−_ _f_ ( _xn_ ) _∥_ 2 + _λ,_ (1) + + +where _λ_ is a pre-set margin. In other words, we +only consider triplets with the negative sample +closer to the positive sample by a margin of _λ_ . 
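A minimal sketch of this selection rule follows; the tensor names and the margin value are illustrative, not the released implementation.

```python
import torch

def select_hard_triplets(anchor, positive, negative, margin=0.2):
    """Keep triplets that violate Eq. (1), i.e. the negative is at least
    `margin` closer to the anchor than the positive is."""
    d_pos = torch.norm(anchor - positive, dim=-1)  # ||f(x_a) - f(x_p)||_2
    d_neg = torch.norm(anchor - negative, dim=-1)  # ||f(x_a) - f(x_n)||_2
    return d_pos >= d_neg + margin                 # boolean mask over triplets
```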
+These are the hard triplets as their original representations were very far from correct. Every hard +triplet contributes one hard positive pair ( _xa, xp_ ) +and one hard negative pair ( _xa, xn_ ). We collect +all such positive & negative pairs and denote them +as _P, N_ . A similar but not identical triplet mining condition was used by Schroff et al. (2015) for +face recognition to select hard negative samples. +Switching-off this mining process, causes a drastic +performance drop (see Tab. 2). + + +**Loss Function.** We compute the pairwise cosine +similarity of all the BERT-produced name representations and obtain a similarity matrix **S** _∈_ +R _[|X][b][|×|X][b][|]_ where each entry **S** _ij_ corresponds to the +cosine similarity between the _i_ -th and _j_ -th names in +the mini-batch _b_ . We adapted the Multi-Similarity +loss (MS loss, Wang et al. 2019), a SOTA metric +learning objective on visual recognition, for learning from the positive and negative pairs: + + + + + +_e_ _[−][β]_ [(] **[S]** _[ip][−][ϵ]_ [)][��] + +_p∈Pi_ + + + +_,_ + + + +(2) + + + +1 +_L_ = +_|Xb|_ + + + +_|Xb|_ + + + +_i_ =1 + + ++ [1] + + + + +1 - 1 + _e_ _[α]_ [(] **[S]** _[in][−][ϵ]_ [)][�] +_α_ [log] + +_n∈Ni_ + + + + +1 - 1 + +_α_ [log] + + + +While the first term in Eq. 2 pushes negative +pairs away from each other, the second term pulls +positive pairs together. This dynamic allows for +a re-calibration of the alignment space using the +semantic biases of synonymy relations. The MS +loss leverages similarities among and between positive and negative pairs to re-weight the importance +of the samples. The most informative pairs will +receive more gradient signals during training and +thus can better use the information stored in data. + + +**3** **Experiments and Discussions** + + +**3.1** **Experimental Setups** + + +**Data Preparation Details for UMLS Pretrain-** +**ing.** We download the full release of UMLS +2020AA version. [9] We then extract all English +entries from the MRCONSO.RFF raw file and +convert all entity names into lowercase (duplicates are removed). Besides synonyms defined +in MRCONSO.RFF, we also include tradenames of +drugs as synonyms (extracted from MRREL.RRF). +After pre-processing, a list of 9,712,959 (name, +CUI) entries is obtained. However, random batching on this list can lead to very few (if not none) +positive pairs within a mini-batch. To ensure sufficient positives present in each mini-batch, we generate offline positive pairs in the format of (name1, +name2, CUI) where name1 and name2 have the +same CUI label. This can be achieved by enumerating all possible combinations of synonym pairs +with common CUIs. For balanced training, any +concepts with more than 50 positive pairs are randomly trimmed to 50 pairs. In the end we obtain a +training list with 11,792,953 pairwise entries. + + +**UMLS Pretraining Details.** During training, we +use AdamW (Loshchilov and Hutter, 2018) with +a learning rate of 2e-5 and weight decay rate of +1e-2. Models are trained on the prepared pairwise +UMLS data for 1 epoch (approximately 50k iterations) with a batch size of 512 (i.e., 256 pairs per +mini-batch). We train with Automatic Mixed Precision (AMP) [10] provided in PyTorch 1.7.0. This +takes approximately 5 hours on our machine (configurations specified in App. §B.4). For other hyper + +(Oord et al., 2018), NCA loss (Goldberger et al., 2005), +simple cosine loss (Phan et al., 2019), max-margin triplet +loss (Basaldella et al., 2020) but found our choice is empirically better. 
See App. §B.2 for comparison. +9 +[https://download.nlm.nih.gov/umls/kss/2020AA/](https://download.nlm.nih.gov/umls/kss/2020AA/umls-2020AA-full.zip) +[umls-2020AA-full.zip](https://download.nlm.nih.gov/umls/kss/2020AA/umls-2020AA-full.zip) + +10 +[https://pytorch.org/docs/stable/amp.html](https://pytorch.org/docs/stable/amp.html) + + + + + - 1 + +_β_ [log] + + + +where _α, β_ are temperature scales; _ϵ_ is an offset +applied on the similarity matrix; _Pi, Ni_ are indices +of positive and negative samples of the _anchor i_ . [8] + + +7Most of _Hydroxychloroquine_ ’s variants are easy: _Hydrox-_ +_ychlorochin_, _Hydroxychloroquine (substance)_, _Hidroxicloro-_ +_quina_, but a few can be very hard: _Plaquenil_ and _HCQ_ . +8We explored several loss functions such as InfoNCE + + + +3 + + +scientific language social media language + + +NCBI BC5CDR-d BC5CDR-c MedMentions AskAPatient COMETA +model + + +@1 @5 @1 @5 @1 @5 @1 @5 @1 @5 @1 @5 + + +vanilla BERT (Devlin et al., 2019) 67.6 77.0 81.4 89.1 79.8 91.2 39.6 60.2 38.2 43.3 40.4 47.7 + +BIOBERT (Lee et al., 2020) 71.3 84.1 79.8 92.3 74.0 90.0 24.2 38.5 41.4 51.5 35.9 46.1 + +BLUEBERT (Peng et al., 2019) 75.7 87.2 83.2 91.0 87.7 94.1 41.6 61.9 41.5 48.5 42.9 52.9 + +CLINICALBERT (Alsentzer et al., 2019) 72.1 84.5 82.7 91.6 75.9 88.5 43.9 54.3 43.1 51.8 40.6 61.8 + +SCIBERT (Beltagy et al., 2019) 85.1 88.4 89.3 92.8 94.2 95.5 42.3 51.9 48.0 54.8 45.8 66.8 + +UMLSBERT (Michalopoulos et al., 2020) 77.0 85.4 85.5 92.5 88.9 94.1 36.1 55.8 44.4 54.5 44.6 53.0 + +PUBMEDBERT (Gu et al., 2020) 77.8 86.9 89.0 93.8 93.0 94.6 43.9 64.7 42.5 49.6 46.8 53.2 ++ SAPBERT 92.0 95.6 93.5 96.0 96.5 98.2 50.8 74.4 70.5 88.9 65.9 77.9 + + +Table 1: **Top** : Comparison of 7 BERT-based models before and after SAPBERT pretraining (+ SAPBERT). All +results in this section are from unsupervised learning (not fine-tuned on task data). The gradient of green indicates + + +the improvement comparing to the base model (the deeper the more). **Bottom** : SAPBERT vs. SOTA results. Blue + +and red denote unsupervised and supervised models. **Bold** and underline denote the best and second best results +in the column. “ _[†]_ ” denotes statistically significant better than supervised SOTA (T-test, _ρ <_ 0 _._ 05). On COMETA, +the results inside the parentheses added the supervised SOTA’s dictionary back-off technique (Basaldella et al., +2020). “-”: not reported in the SOTA paper. “OOM”: out-of-memory (192GB+). + + + +parameters used, please view App. §C.2. + + +**Evaluation Data and Protocol.** We experiment +on 6 different English MEL datasets: 4 in the scientific domain (NCBI, Do˘gan et al. 2014; BC5CDR-c +and BC5CDR-d, Li et al. 2016; MedMentions, Mohan and Li 2018) and 2 in the social media domain +(COMETA, Basaldella et al. 2020 and AskAPatient, Limsopatham and Collier 2016). Descriptions of the datasets and their statistics are provided +in App. §A. We report Acc@1 and Acc@5 (denoted +as @1 and @5) for evaluating performance. In all +experiments, SAPBERT denotes further pretraining +with our self-alignment method on UMLS. At the +test phase, for all SAPBERT models we use nearest neighbour search without further fine-tuning on +task data (unless stated otherwise). Except for numbers reported in previous papers, all results are the +average of five runs with different random seeds. + + +**Fine-Tuning on Task Data.** The red rows in Tab. 1 +are results of models (further) fine-tuned on the +training sets of the six MEL datasets. 
Similar to +pretraining, a positive pair list is generated through +traversing the combinations of mention and all +ground truth synonyms where mentions are from +the training set and ground truth synonyms are from + + + +the reference ontology. We use the same optimiser +and learning rates but train with a batch size of +256 (to accommodate the memory of 1 GPU). On +scientific language datasets, we train for 3 epochs +while on AskAPatient and COMETA we train for +15 and 10 epochs respectively. For BIOSYN on social media language datasets, we empirically found +that 10 epochs work the best. Other configurations +are the same as the original BIOSYN paper. + + +**3.2** **Main Results and Analysis** + + +***BERT + SAPBERT (Tab. 1, top).** We illustrate +the impact of SAPBERT pretraining over 7 existing BERT-based models (*BERT = {BIOBERT, +PUBMEDBERT, ...}). SAPBERT obtains consistent improvement over all *BERT models across all +datasets, with larger gains (by up to 31.0% absolute +Acc@1 increase) observed in the social media domain. While SCIBERT is the leading model before +applying SAPBERT, PUBMEDBERT+SAPBERT +performs the best afterwards. + + +**SAPBERT vs. SOTA (Tab. 1, bottom).** We take +PUBMEDBERT+SAPBERT (w/wo fine-tuning) and +compare against various published SOTA results +(see App. §C.1 for a full listing of 10 baselines) + + + +4 + + +which all require task supervision. For the scientific language domain, the SOTA is BIOSYN (Sung +et al., 2020). For the social media domain, the +SOTA are Basaldella et al. (2020) and GENRANK (Xu et al., 2020) on COMETA and AskAPatient respectively. All these SOTA methods combine BERT with heuristic modules such as tf-idf, +string matching and information retrieval system +(i.e. Apache Lucene) in a multi-stage manner. + +Measured by Acc@1, SAPBERT achieves new +SOTA with statistical significance on 5 of the 6 +datasets and for the dataset (BC5CDR-c) where +SAPBERT is not significantly better, it performs on +par with SOTA (96.5 vs. 96.6). Interestingly, on scientific language datasets, SAPBERT outperforms +SOTA without any task supervision (fine-tuning +mostly leads to overfitting and performance drops). +On social media language datasets, unsupervised +SAPBERT lags behind supervised SOTA by large +margins, highlighting the well-documented complex nature of social media language (Baldwin +et al., 2013; Limsopatham and Collier, 2015, 2016; +Basaldella et al., 2020; Tutubalina et al., 2020). +However, after fine-tuning on the social media +datasets (using the MS loss introduced earlier), +SAPBERT outperforms SOTA significantly, indicating that knowledge acquired during the selfaligning pretraining can be adapted to a shifted +domain without much effort. + + +**The ADAPTER Variant.** As an option for parameter efficient pretraining, we explore a variant of +SAPBERT using a recently introduced training module named ADAPTER (Houlsby et al., 2019). While +maintaining the same pretraining scheme with the +same SAPBERT online mining + MS loss, instead +of training from the full model of PUBMEDBERT, +we insert new ADAPTER layers between Transformer layers of the fixed PUBMEDBERT, and only +train the weights of these ADAPTER layers. In our +experiments, we use the enhanced ADAPTER configuration by Pfeiffer et al. (2020). We include two +variants where trained parameters are 13.22% and +1.09% of the full SAPBERT variant. 
The ADAPTER +variant of SAPBERT achieves comparable performance to full-model-tuning in scientific datasets +but lags behind in social media datasets, Tab. 1. The +results indicate that more parameters are needed +in pretraining for knowledge transfer to a shifted +domain, in our case, the social media datasets. + + +**The Impact of Online Mining (Eq. (1)).** As sug + + +gested in Tab. 2, switching off the online hard pairs +mining procedure causes a large performance drop +in @1 and a smaller but still significant drop in @5. +This is due to the presence of many easy and already well-separated samples in the mini-batches. +These uninformative training examples dominated +the gradients and harmed the learning process. + + +configuration @1 @5 + + +Mining switched-on **67.2** **80.3** +Mining switched-off 52.3 _↓_ 14 _._ 9 76.1 _↓_ 4 _._ 2 + + +Table 2: This table compares PUBMEDBERT+SAPBERT’s performance with and without +online hard mining on COMETA (zeroshot general). + + +**Integrating SAPBERT in Existing Systems.** +SAPBERT can be easily inserted into existing +BERT-based MEL systems by initialising the systems with SAPBERT pretrained weights. We use +the SOTA scientific language system, BIOSYN +(originally initialised with BIOBERT weights), as +an example and show the performance is boosted +across all datasets (last two rows, Tab. 1). + + +**4** **Conclusion** + + +We present SAPBERT, a self-alignment pretraining +scheme for learning biomedical entity representations. We highlight the consistent performance +boost achieved by SAPBERT, obtaining new SOTA +in all six widely used MEL benchmarking datasets. +Strikingly, without any fine-tuning on task-specific +labelled data, SAPBERT already outperforms the +previous supervised SOTA (sophisticated hybrid entity linking systems) on multiple datasets in the scientific language domain. Our work opens new avenues to explore for general domain self-alignment +(e.g. by leveraging knowledge graphs such as DBpedia). We plan to incorporate other types of relations (i.e., hypernymy and hyponymy) and extend +our model to sentence-level representation learning. +In particular, our ongoing work using a combination of SAPBERT and ADAPTER is a promising +direction for tackling sentence-level tasks. + + +**Acknowledgements** + + +We thank the three reviewers and the Area Chair +for their insightful comments and suggestions. FL +is supported by Grace & Thomas C.H. Chan Cambridge Scholarship. NC and MB would like to +acknowledge funding from Health Data Research +UK as part of the National Text Analytics project. + + + +5 + + +**References** + + +Emily Alsentzer, John Murphy, William Boag, WeiHung Weng, Di Jindi, Tristan Naumann, and +[Matthew McDermott. 2019. Publicly available clini-](https://doi.org/10.18653/v1/W19-1909) +[cal BERT embeddings. In](https://doi.org/10.18653/v1/W19-1909) _Proceedings of the 2nd_ +_Clinical Natural Language Processing Workshop_, +pages 72–78, Minneapolis, Minnesota, USA. Association for Computational Linguistics. + + +Timothy Baldwin, Paul Cook, Marco Lui, Andrew +MacKinlay, and Li Wang. 2013. [How noisy so-](https://www.aclweb.org/anthology/I13-1041) +[cial media text, how diffrnt social media sources?](https://www.aclweb.org/anthology/I13-1041) +In _Proceedings of the Sixth International Joint Con-_ +_ference on Natural Language Processing (IJCNLP)_, +pages 356–364, Nagoya, Japan. Asian Federation of +Natural Language Processing. + + +Marco Basaldella, Fangyu Liu, Ehsan Shareghi, and +[Nigel Collier. 2020. 
COMETA: A corpus for med-](https://www.aclweb.org/anthology/2020.emnlp-main.253) +[ical entity linking in the social media. In](https://www.aclweb.org/anthology/2020.emnlp-main.253) _Proceed-_ +_ings of the 2020 Conference on Empirical Methods_ +_in Natural Language Processing (EMNLP)_, pages +3122–3137, Online. Association for Computational +Linguistics. + + +[Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB-](https://doi.org/10.18653/v1/D19-1371) +[ERT: A pretrained language model for scientific text.](https://doi.org/10.18653/v1/D19-1371) +In _Proceedings of the 2019 Conference on Empirical_ +_Methods in Natural Language Processing and the_ +_9th International Joint Conference on Natural Lan-_ +_guage Processing (EMNLP-IJCNLP)_, pages 3615– +3620, Hong Kong, China. Association for Computational Linguistics. + + +Olivier Bodenreider. 2004. [The unified medical lan-](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC308795/pdf/gkh061.pdf) +[guage system (UMLS): integrating biomedical ter-](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC308795/pdf/gkh061.pdf) +[minology.](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC308795/pdf/gkh061.pdf) _Nucleic Acids Research_, 32:D267–D270. + + +Allan Peter Davis, Cynthia J Grondin, Robin J Johnson, +Daniela Sciaky, Roy McMorran, Jolene Wiegers, +Thomas C Wiegers, and Carolyn J Mattingly. 2019. +[The comparative toxicogenomics database: update](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6323936/pdf/gky868.pdf) +[2019.](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6323936/pdf/gky868.pdf) _Nucleic Acids Research_, 47:D948–D954. + + +Allan Peter Davis, Thomas C Wiegers, Michael C +[Rosenstein, and Carolyn J Mattingly. 2012. MEDIC:](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3308155/pdf/bar065.pdf) +[a practical disease vocabulary used at the compara-](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3308155/pdf/bar065.pdf) +[tive toxicogenomics database.](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3308155/pdf/bar065.pdf) _Database_ . + + +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and +Kristina Toutanova. 2019. [BERT: Pre-training of](https://doi.org/10.18653/v1/N19-1423) +[deep bidirectional transformers for language under-](https://doi.org/10.18653/v1/N19-1423) +[standing.](https://doi.org/10.18653/v1/N19-1423) In _Proceedings of the 2019 Conference_ +_of the North American Chapter of the Association_ +_for Computational Linguistics: Human Language_ +_Technologies (NAACL), Volume 1 (Long and Short_ +_Papers)_, pages 4171–4186, Minneapolis, Minnesota. +Association for Computational Linguistics. + + +Rezarta Islamaj Do˘gan, Robert Leaman, and Zhiyong +[Lu. 2014. NCBI disease corpus: a resource for dis-](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/pdf/nihms557856.pdf) +[ease name recognition and concept normalization.](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/pdf/nihms557856.pdf) +_Journal of Biomedical Informatics_, 47:1–10. + + + +[Kevin Donnelly. 2006. SNOMED-CT: The advanced](https://pubmed.ncbi.nlm.nih.gov/17095826/) +[terminology and coding system for eHealth.](https://pubmed.ncbi.nlm.nih.gov/17095826/) _Studies_ +_in health technology and informatics_, 121:279. + + +[Jennifer D’Souza and Vincent Ng. 2015. 
Sieve-based](https://doi.org/10.3115/v1/P15-2049) +[entity linking for the biomedical domain.](https://doi.org/10.3115/v1/P15-2049) In _Pro-_ +_ceedings of the 53rd Annual Meeting of the Associ-_ +_ation for Computational Linguistics and the 7th In-_ +_ternational Joint Conference on Natural Language_ +_Processing (ACL-IJCNLP) (Volume 2:_ _Short Pa-_ +_pers)_, pages 297–302, Beijing, China. Association +for Computational Linguistics. + + +Jacob Goldberger, Geoffrey E Hinton, Sam T Roweis, +[and Russ R Salakhutdinov. 2005. Neighbourhood](https://www.cs.toronto.edu/~hinton/absps/nca.pdf) +[components analysis. In](https://www.cs.toronto.edu/~hinton/absps/nca.pdf) _Advances in Neural Infor-_ +_mation Processing Systems_, pages 513–520. + + +Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, +Naoto Usuyama, Xiaodong Liu, Tristan Naumann, +Jianfeng Gao, and Hoifung Poon. 2020. [Domain-](https://arxiv.org/pdf/2007.15779.pdf) +[specific language model pretraining for biomedical](https://arxiv.org/pdf/2007.15779.pdf) +[natural language processing.](https://arxiv.org/pdf/2007.15779.pdf) _arXiv:2007.15779_ . + + +Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and +[Ross Girshick. 2020. Momentum contrast for unsu-](https://openaccess.thecvf.com/content_CVPR_2020/papers/He_Momentum_Contrast_for_Unsupervised_Visual_Representation_Learning_CVPR_2020_paper.pdf) +[pervised visual representation learning. In](https://openaccess.thecvf.com/content_CVPR_2020/papers/He_Momentum_Contrast_for_Unsupervised_Visual_Representation_Learning_CVPR_2020_paper.pdf) _Proceed-_ +_ings of the IEEE/CVF Conference on Computer Vi-_ +_sion and Pattern Recognition_, pages 9729–9738. + + +Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, +Bruna Morrone, Quentin de Laroussilhe, Andrea +Gesmundo, Mona Attariyan, and Sylvain Gelly. +[2019. Parameter-efficient transfer learning for NLP.](http://proceedings.mlr.press/v97/houlsby19a.html) +In _Proceedings of the 36th International Confer-_ +_ence on Machine Learning, ICML 2019, 9-15 June_ +_2019, Long Beach, California, USA_, volume 97 of +_Proceedings of Machine Learning Research_, pages +2790–2799. PMLR. + + +[Zongcheng Ji, Qiang Wei, and Hua Xu. 2020. BERT-](https://arxiv.org/pdf/1908.03548.pdf) +[based ranking for biomedical entity normalization.](https://arxiv.org/pdf/1908.03548.pdf) +_AMIA Summits on Translational Science Proceed-_ +_ings_, 2020:269. + + +Donghyeon Kim, Jinhyuk Lee, Chan Ho So, Hwisang +Jeon, Minbyul Jeong, Yonghwa Choi, Wonjin Yoon, +[Mujeen Sung,, and Jaewoo Kang. 2019. A neural](https://ieeexplore.ieee.org/document/8730332) +[named entity recognition and multi-type normaliza-](https://ieeexplore.ieee.org/document/8730332) +[tion tool for biomedical text mining.](https://ieeexplore.ieee.org/document/8730332) _IEEE Access_, +7:73729–73740. + + +Robert Leaman and Zhiyong Lu. 2016. [Tag-](https://academic.oup.com/bioinformatics/article/32/18/2839/1744190) +[gerOne: joint named entity recognition and normal-](https://academic.oup.com/bioinformatics/article/32/18/2839/1744190) +[ization with semi-markov models.](https://academic.oup.com/bioinformatics/article/32/18/2839/1744190) _Bioinformatics_, +32:2839–2846. + + +Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, +Donghyeon Kim, Sunkyu Kim, Chan Ho So, +and Jaewoo Kang. 2020. 
[BioBERT: a pre-](https://academic.oup.com/bioinformatics/article/36/4/1234/5566506) +[trained biomedical language representation model](https://academic.oup.com/bioinformatics/article/36/4/1234/5566506) +for [biomedical](https://academic.oup.com/bioinformatics/article/36/4/1234/5566506) text mining. _Bioinformatics_, +36(4):1234–1240. + + + +6 + + +Sunwon Lee, Donghyeon Kim, Kyubum Lee, Jaehoon +Choi, Seongsoon Kim, Minji Jeon, Sangrak Lim, +Donghee Choi, Sunkyu Kim, Aik-Choon Tan, et al. +2016. [BEST: next-generation biomedical entity](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0164680) +[search tool for knowledge discovery from biomed-](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0164680) +[ical literature.](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0164680) _PloS one_, 11:e0164680. + + +Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter +Davis, Carolyn J Mattingly, Thomas C Wiegers, and +[Zhiyong Lu. 2016. BioCreative V CDR task corpus:](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/pdf/baw068.pdf) +[a resource for chemical disease relation extraction.](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/pdf/baw068.pdf) +_Database_, 2016. + + +Nut Limsopatham and Nigel Collier. 2015. [Adapt-](https://doi.org/10.18653/v1/D15-1194) +[ing phrase-based machine translation to normalise](https://doi.org/10.18653/v1/D15-1194) +[medical terms in social media messages.](https://doi.org/10.18653/v1/D15-1194) In _Pro-_ +_ceedings of the 2015 Conference on Empirical Meth-_ +_ods in Natural Language Processing_, pages 1675– +1680, Lisbon, Portugal. Association for Computational Linguistics. + + +[Nut Limsopatham and Nigel Collier. 2016. Normalis-](https://www.aclweb.org/anthology/P16-1096/) +[ing medical concepts in social media texts by learn-](https://www.aclweb.org/anthology/P16-1096/) +[ing semantic representation. In](https://www.aclweb.org/anthology/P16-1096/) _Proceedings of the_ +_54th Annual Meeting of the Association for Compu-_ +_tational Linguistics_, pages 1014–1023. + + +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, +Luke Zettlemoyer, and Veselin Stoyanov. 2019. +[Roberta: A robustly optimized bert pretraining ap-](https://arxiv.org/pdf/1907.11692.pdf) +[proach.](https://arxiv.org/pdf/1907.11692.pdf) _arXiv preprint arXiv:1907.11692_ . + + +Ilya Loshchilov and Frank Hutter. 2018. [Decoupled](https://arxiv.org/pdf/1711.05101.pdf) +[weight decay regularization. In](https://arxiv.org/pdf/1711.05101.pdf) _International Con-_ +_ference on Learning Representations_ . + + +Laurens van der Maaten and Geoffrey Hinton. 2008. + +[Visualizing data using t-SNE.](https://www.jmlr.org/papers/v9/vandermaaten08a.html) _Journal of machine_ +_learning research_, 9(Nov):2579–2605. + + +George Michalopoulos, Yuanxin Wang, Hussam Kaka, +Helen Chen, and Alex Wong. 2020. Umlsbert: Clinical domain knowledge augmentation of +contextual embeddings using the unified medical +language system metathesaurus. _arXiv preprint_ +_arXiv:2010.10391_ . + + +[Sunil Mohan and Donghui Li. 2018. MedMentions: A](https://arxiv.org/pdf/1902.09476.pdf) +[large biomedical corpus annotated with UMLS con-](https://arxiv.org/pdf/1902.09476.pdf) +[cepts. In](https://arxiv.org/pdf/1902.09476.pdf) _Automated Knowledge Base Construction_ . + + +Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. 2016. 
Deep metric learning via lifted +structured feature embedding. In _Proceedings of the_ +_IEEE Conference on Computer Vision and Pattern_ +_Recognition_, pages 4004–4012. + + +Aaron van den Oord, Yazhe Li, and Oriol Vinyals. +[2018. Representation learning with contrastive pre-](https://arxiv.org/pdf/1807.03748.pdf) +[dictive coding.](https://arxiv.org/pdf/1807.03748.pdf) _arXiv preprint arXiv:1807.03748_ . + + +Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. + +[Transfer learning in biomedical natural language](https://www.aclweb.org/anthology/W19-5006.pdf) +[processing: An evaluation of bert and elmo on ten](https://www.aclweb.org/anthology/W19-5006.pdf) + + + +[benchmarking datasets. In](https://www.aclweb.org/anthology/W19-5006.pdf) _Proceedings of the 2019_ +_Workshop on Biomedical Natural Language Process-_ +_ing_, pages 58–65. + + +Jonas Pfeiffer, Ivan Vuli´c, Iryna Gurevych, and Se[bastian Ruder. 2020. MAD-X: An Adapter-Based](https://www.aclweb.org/anthology/2020.emnlp-main.617) +[Framework for Multi-Task Cross-Lingual Transfer.](https://www.aclweb.org/anthology/2020.emnlp-main.617) +In _Proceedings of the 2020 Conference on Empirical_ +_Methods in Natural Language Processing (EMNLP)_, +pages 7654–7673, Online. Association for Computational Linguistics. + + +[Minh C Phan, Aixin Sun, and Yi Tay. 2019. Robust](https://www.aclweb.org/anthology/P19-1317/) +[representation learning of biomedical names. In](https://www.aclweb.org/anthology/P19-1317/) _Pro-_ +_ceedings of the 57th Annual Meeting of the Asso-_ +_ciation for Computational Linguistics_, pages 3275– +3285. + + +Kirk Roberts, Matthew S Simpson, Ellen M Voorhees, +[and William R Hersh. 2015. Overview of the trec](https://trec.nist.gov/pubs/trec24/papers/Overview-CL.pdf) +[2015 clinical decision support track. In](https://trec.nist.gov/pubs/trec24/papers/Overview-CL.pdf) _TREC_ . + + +Florian Schroff, Dmitry Kalenichenko, and James +Philbin. 2015. [Facenet: A unified embedding for](https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Schroff_FaceNet_A_Unified_2015_CVPR_paper.html) +[face recognition and clustering. In](https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Schroff_FaceNet_A_Unified_2015_CVPR_paper.html) _Proceedings of_ +_the IEEE Conference on Computer Vision and Pat-_ +_tern Recognition_, pages 815–823. + + +Elliot Schumacher, Andriy Mulyar, and Mark Dredze. +2020. Clinical concept linking with contextualized +neural representations. In _Proceedings of the 58th_ +_Annual Meeting of the Association for Computa-_ +_tional Linguistics_, pages 8585–8592. + + +Hoo-Chang Shin, Yang Zhang, Evelina Bakhturina, +Raul Puri, Mostofa Patwary, Mohammad Shoeybi, +and Raghav Mani. 2020. [BioMegatron:](https://www.aclweb.org/anthology/2020.emnlp-main.379) Larger +[biomedical domain language model.](https://www.aclweb.org/anthology/2020.emnlp-main.379) In _Proceed-_ +_ings of the 2020 Conference on Empirical Methods_ +_in Natural Language Processing (EMNLP)_, pages +4700–4706, Online. Association for Computational +Linguistics. + + +Yifan Sun, Changmao Cheng, Yuhan Zhang, Chi +Zhang, Liang Zheng, Zhongdao Wang, and Yichen +Wei. 2020. Circle loss: A unified perspective of +pair similarity optimization. In _Proceedings of the_ +_IEEE/CVF Conference on Computer Vision and Pat-_ +_tern Recognition_, pages 6398–6407. + + +Mujeen Sung, Hwisang Jeon, Jinhyuk Lee, and Jae[woo Kang. 2020. Biomedical entity representations](https://doi.org/10.18653/v1/2020.acl-main.335) +[with synonym marginalization. 
In](https://doi.org/10.18653/v1/2020.acl-main.335) _Proceedings of_ +_the 58th Annual Meeting of the Association for Com-_ +_putational Linguistics (ACL)_, pages 3641–3650, Online. Association for Computational Linguistics. + + +Elena Tutubalina, Artur Kadurin, and Zulfat Miftahut[dinov. 2020. Fair evaluation in concept normaliza-](https://www.researchgate.net/profile/Elena_Tutubalina/publication/345774709_Fair_Evaluation_in_Concept_Normalization_a_Large-scale_Comparative_Analysis_for_BERT-based_Models/links/5fad803d92851cf7dd18ac70/Fair-Evaluation-in-Concept-Normalization-a-Large-scale-Comparative-Analysis-for-BERT-based-Models.pdf) +[tion: a large-scale comparative analysis for bert-](https://www.researchgate.net/profile/Elena_Tutubalina/publication/345774709_Fair_Evaluation_in_Concept_Normalization_a_Large-scale_Comparative_Analysis_for_BERT-based_Models/links/5fad803d92851cf7dd18ac70/Fair-Evaluation-in-Concept-Normalization-a-Large-scale-Comparative-Analysis-for-BERT-based-Models.pdf) +[based models.](https://www.researchgate.net/profile/Elena_Tutubalina/publication/345774709_Fair_Evaluation_in_Concept_Normalization_a_Large-scale_Comparative_Analysis_for_BERT-based_Models/links/5fad803d92851cf7dd18ac70/Fair-Evaluation-in-Concept-Normalization-a-Large-scale-Comparative-Analysis-for-BERT-based-Models.pdf) In _Proceedings of the 28th Inter-_ +_national Conference on Computational Linguistics_ +_(COLING)_ . + + + +7 + + +Elena Tutubalina, Zulfat Miftahutdinov, Sergey +Nikolenko, and Valentin Malykh. 2018. [Medical](https://www.sciencedirect.com/science/article/pii/S1532046418301126) +[concept normalization in social media posts with](https://www.sciencedirect.com/science/article/pii/S1532046418301126) +[recurrent neural networks.](https://www.sciencedirect.com/science/article/pii/S1532046418301126) _Journal of Biomedical_ +_Informatics_, 84:93–102. + + +Ivan Vuli´c, Edoardo Maria Ponti, Robert Litschko, +[Goran Glavaš, and Anna Korhonen. 2020. Probing](https://www.aclweb.org/anthology/2020.emnlp-main.586) +[pretrained language models for lexical semantics. In](https://www.aclweb.org/anthology/2020.emnlp-main.586) +_Proceedings of the 2020 Conference on Empirical_ +_Methods in Natural Language Processing (EMNLP)_, +pages 7222–7240, Online. Association for Computational Linguistics. + + +Xun Wang, Xintong Han, Weilin Huang, Dengke Dong, +[and Matthew R Scott. 2019. Multi-similarity loss](https://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Multi-Similarity_Loss_With_General_Pair_Weighting_for_Deep_Metric_Learning_CVPR_2019_paper.pdf) +[with general pair weighting for deep metric learn-](https://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Multi-Similarity_Loss_With_General_Pair_Weighting_for_Deep_Metric_Learning_CVPR_2019_paper.pdf) +[ing. In](https://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Multi-Similarity_Loss_With_General_Pair_Weighting_for_Deep_Metric_Learning_CVPR_2019_paper.pdf) _Proceedings of the IEEE Conference on Com-_ +_puter Vision and Pattern Recognition_, pages 5022– +5030. + + +Yanshan Wang, Sijia Liu, Naveed Afzal, Majid +Rastegar-Mojarad, Liwei Wang, Feichen Shen, Paul +[Kingsbury, and Hongfang Liu. 2018. A comparison](https://www.sciencedirect.com/science/article/pii/S1532046418301825) +[of word embeddings for the biomedical natural lan-](https://www.sciencedirect.com/science/article/pii/S1532046418301825) +[guage processing.](https://www.sciencedirect.com/science/article/pii/S1532046418301825) _Journal of Biomedical Informat-_ +_ics_, 87:12–20. 
Dustin Wright, Yannis Katsis, Raghav Mehta, and Chun-Nan Hsu. 2019. [NormCo: Deep disease normalization for biomedical knowledge base construction.](https://openreview.net/forum?id=BJerQWcp6Q) In _Automated Knowledge Base Construction_.


Dongfang Xu, Zeyu Zhang, and Steven Bethard. 2020. [A generate-and-rank framework with semantic type regularization for biomedical concept normalization.](https://www.aclweb.org/anthology/2020.acl-main.748/) In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 8452–8464.


**A** **Evaluation Datasets Details**


We divide our experimental datasets into two categories: (1) scientific language datasets, where the data is extracted from scientific papers, and (2) social media language datasets, where the data comes from social media forums such as Reddit.com. For an overview of the key statistics, see Tab. 3.


**A.1** **Scientific Language Datasets**


**NCBI disease (Do˘gan et al., 2014)** is a corpus containing 793 fully annotated PubMed abstracts and 6,881 mentions. The mentions are mapped into the MEDIC dictionary (Davis et al., 2012). We denote this dataset as “NCBI” in our experiments.


**BC5CDR (Li et al., 2016)** consists of 1,500 PubMed articles with 4,409 annotated chemicals, 5,818 diseases and 3,116 chemical-disease interactions. The disease mentions are mapped into the MEDIC dictionary like the NCBI disease corpus. The chemical mentions are mapped into the Comparative Toxicogenomics Database (CTD) (Davis et al., 2019) chemical dictionary. We denote the disease and chemical mention sets as “BC5CDR-d” and “BC5CDR-c” respectively. For NCBI and BC5CDR we use the same data and evaluation protocol as Sung et al. (2020). [11]


**MedMentions (Mohan and Li, 2018)** is a very-large-scale entity linking dataset containing over 4,000 abstracts and over 350,000 mentions linked to UMLS 2017AA. According to Mohan and Li (2018), training TAGGERONE (Leaman and Lu, 2016), a very popular MEL system, on a subset of MedMentions requires >900 GB of RAM. Its massive number of mentions and, more importantly, the reference ontology used (UMLS 2017AA has over 3M concepts) make the application of most MEL systems infeasible. However, through our metric learning formulation, SAPBERT can be applied to MedMentions with minimal effort.


**A.2** **Social-Media Language Datasets**


**AskAPatient (Limsopatham and Collier, 2016)** includes 17,324 adverse drug reaction (ADR) annotations collected from askapatient.com blog posts. The mentions are mapped to 1,036 medical concepts grounded in SNOMED-CT (Donnelly, 2006) and AMT (the Australian Medicines Terminology). For this dataset, we follow the 10-fold evaluation protocol stated in the original paper. [12]


**COMETA (Basaldella et al., 2020)** is a recently released large-scale MEL dataset that specifically focuses on MEL in the social media domain, containing around 20k medical mentions extracted from health-related discussions on reddit.com. Mentions are mapped to SNOMED-CT. We use the “stratified (general)” split and follow the evaluation protocol of the original paper. [13]
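Across all of the datasets above, evaluation follows the same dense-retrieval recipe: encode the query mention and every surface form in the searched synonym set with the same encoder, rank the names by similarity, and report Acc@1 and Acc@5 over the underlying concept IDs. The sketch below is illustrative only: the function and tensor names are our own, and it assumes mention and name embeddings have already been produced by the encoder.

```python
import torch
import torch.nn.functional as F

def accuracy_at_k(mention_emb, name_emb, name_cuis, gold_cuis, k=5):
    """Nearest-neighbour MEL evaluation (schematic): retrieve the top-k
    concept surface forms for each mention and count a hit whenever a
    retrieved name maps to the gold concept ID (CUI)."""
    mention_emb = F.normalize(mention_emb, dim=-1)
    name_emb = F.normalize(name_emb, dim=-1)
    sims = mention_emb @ name_emb.t()          # [num_mentions, |S_searched|]
    topk = sims.topk(k, dim=-1).indices        # indices into the name list
    hits_at_1, hits_at_k = 0, 0
    for i, gold in enumerate(gold_cuis):
        retrieved = [name_cuis[j] for j in topk[i].tolist()]
        hits_at_1 += int(retrieved[0] == gold)
        hits_at_k += int(gold in retrieved)
    n = len(gold_cuis)
    return hits_at_1 / n, hits_at_k / n
```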
**B** **Model & Training Details**


**B.1** **The Choice of Base Models**


We list all the versions of BERT models used in this study, linking to the specific versions in Tab. 5. Note that we exhaustively tried all official variants of the selected models and chose the best-performing ones. All BERT models refer to the BERT-Base architecture in this paper.


[11] [https://github.com/dmis-lab/BioSyn](https://github.com/dmis-lab/BioSyn)
[12] [https://zenodo.org/record/55013](https://zenodo.org/record/55013)
[13] [https://www.siphs.org/corpus](https://www.siphs.org/corpus)


| dataset | NCBI | BC5CDR-d | BC5CDR-c | MedMentions | AskAPatient | COMETA (s.g.) | COMETA (z.g.) |
|---|---|---|---|---|---|---|---|
| Ontology | MEDIC | MEDIC | CTD | UMLS 2017AA | SNOMED & AMT | SNOMED | SNOMED |
| _C_ searched ⊊ _C_ ontology? | | | | | | | |
| \|_C_ searched\| | 11,915 | 11,915 | 171,203 | 3,415,665 | 1,036 | 350,830 | 350,830 |
| \|_S_ searched\| | 71,923 | 71,923 | 407,247 | 14,815,318 | 1,036 | 910,823 | 910,823 |
| \|_M_ train\| | 5,134 | 4,182 | 5,203 | 282,091 | 15,665.2 | 13,489 | 14,062 |
| \|_M_ validation\| | 787 | 4,244 | 5,347 | 71,062 | 792.6 | 2,176 | 1,958 |
| \|_M_ test\| | 960 | 4,424 | 5,385 | 70,405 | 866.2 | 4,350 | 3,995 |

Table 3: Basic statistics of the MEL datasets used in the study. _C_ denotes the set of concepts; _S_ denotes the set of all surface forms / synonyms of all concepts in _C_; _M_ denotes the set of mentions / queries. COMETA (s.g.) and (z.g.) are the stratified (general) and zeroshot (general) splits respectively.


| model | NCBI @1 | NCBI @5 | BC5CDR-d @1 | BC5CDR-d @5 | BC5CDR-c @1 | BC5CDR-c @5 | MedMentions @1 | MedMentions @5 | AskAPatient @1 | AskAPatient @5 | COMETA @1 | COMETA @5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SIEVE-BASED (D’Souza and Ng, 2015) | 84.7 | - | 84.1 | - | 90.7 | - | - | - | - | - | - | - |
| WORDCNN (Limsopatham and Collier, 2016) | - | - | - | - | - | - | - | - | 81.4 | - | - | - |
| WORDGRU+TF-IDF (Tutubalina et al., 2018) | - | - | - | - | - | - | - | - | 85.7 | - | - | - |
| TAGGERONE (Leaman and Lu, 2016) | 87.7 | - | 88.9 | - | 94.1 | - | OOM | OOM | - | - | - | - |
| NORMCO (Wright et al., 2019) | 87.8 | - | 88.0 | - | - | - | - | - | - | - | - | - |
| BNE (Phan et al., 2019) | 87.7 | - | 90.6 | - | 95.8 | - | - | - | - | - | - | - |
| BERTRANK (Ji et al., 2020) | 89.1 | - | - | - | - | - | - | - | - | - | - | - |
| GEN-RANK (Xu et al., 2020) | - | - | - | - | - | - | - | - | **87.5** | - | - | - |
| BIOSYN (Sung et al., 2020) | **91.1** | **93.9** | **93.2** | **96.0** | **96.6** | **97.2** | OOM | OOM | 82.6∗ | 87.0∗ | 71.3∗ | 77.8∗ |
| DICT+SOILOS+NEURAL (Basaldella et al., 2020) | - | - | - | - | - | - | - | - | - | - | **79.0** | - |
| supervised SOTA | 91.1 | 93.9 | 93.2 | 96.0 | 96.6 | 97.2 | OOM | OOM | 87.5 | - | 79.0 | - |

Table 4: A list of baselines on the 6 different MEL datasets, including both scientific and social media language ones. The last row collects reported numbers from the best-performing models. “∗” denotes results produced using the officially released code. “-” denotes results not reported in the cited paper. “OOM” means out of memory.


**B.2** **Comparing Loss Functions**


We use COMETA (zeroshot general) as a benchmark for selecting learning objectives. Note that this split of COMETA is different from the stratified-general split used in Tab. 4. It is very challenging (so differences in performance are easy to see) and it does not directly affect the model’s performance on the other datasets. The results are listed in Tab. 6. Note that online mining is switched on for all models here.
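As a concrete reference point, the best-performing objective in this comparison, the Multi-Similarity loss combined with online hard-pair mining, can be sketched as follows. This is an illustrative, single-anchor-loop re-implementation rather than the exact training code; the hyper-parameter values shown are placeholders, and in practice a batched library implementation (e.g., pytorch-metric-learning) would be preferable.

```python
import torch
import torch.nn.functional as F

def multi_similarity_loss(embeddings, labels, alpha=2.0, beta=50.0,
                          base=0.5, eps=0.1):
    """Multi-Similarity loss (Wang et al., 2019) with online hard-pair mining.
    `labels` holds the concept ID (CUI) of each embedded name, so names
    sharing a CUI form positive pairs and all remaining pairs are negatives."""
    emb = F.normalize(embeddings, dim=-1)
    sim = emb @ emb.t()                        # pairwise cosine similarities
    n = sim.size(0)
    loss = sim.new_zeros(())
    for i in range(n):
        pos_mask = labels == labels[i]
        pos_mask[i] = False                    # drop the trivial self-pair
        neg_mask = labels != labels[i]
        pos_sim, neg_sim = sim[i][pos_mask], sim[i][neg_mask]
        if pos_sim.numel() == 0 or neg_sim.numel() == 0:
            continue
        # Online mining: keep only pairs that are hard relative to each other.
        hard_neg = neg_sim[neg_sim + eps > pos_sim.min()]
        hard_pos = pos_sim[pos_sim - eps < neg_sim.max()]
        if hard_neg.numel() > 0:               # push hard negatives below `base`
            loss = loss + torch.log1p(torch.exp(beta * (hard_neg - base)).sum()) / beta
        if hard_pos.numel() > 0:               # pull hard positives above `base`
            loss = loss + torch.log1p(torch.exp(-alpha * (hard_pos - base)).sum()) / alpha
    return loss / n
```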
+ + +loss @1 @5 + + +cosine loss (Phan et al., 2019) 55.1 64.6 +max-margin triplet loss (Basaldella et al., 2020) 64.6 74.6 +NCA loss (Goldberger et al., 2005) 65.2 77.0 +Lifted-Structure loss (Oh Song et al., 2016) 62.0 72.1 +InfoNCE (Oord et al., 2018; He et al., 2020) 63.3 74.2 +Circle loss (Sun et al., 2020) 66.7 78.7 + + +Multi-Similarity loss (Wang et al., 2019) **67.2 80.3** + + +Table 6: This table compares loss functions used +for SAPBERT pretraining. Numbers reported are on +COMETA (zeroshot general). + + +The cosine loss was used by Phan et al. (2019) +for learning UMLS synonyms for LSTM models. +The max-margin triplet loss was used by Basaldella + + + +et al. (2020) for training MEL models. A very +similar (though not identical) hinge-loss was used +by Schumacher et al. (2020) for clinical concept +linking. InfoNCE has been very popular in selfsupervised learning and contrastive learning (Oord +et al., 2018; He et al., 2020). Lifted-Structure loss +(Oh Song et al., 2016) and NCA loss (Goldberger +et al., 2005) are two very classic metric learning objectives. Multi-Similarity loss (Wang et al., 2019) +and Circle loss (Sun et al., 2020) are two recently +proposed metric learning objectives and have been +considered as SOTA on large-scale visual recognition benchmarks. + + +**B.3** **Details of ADAPTERs** + + +In Tab. 7 we list number of parameters trained in +the three ADAPTER variants along with full-modeltuning for easy comparison. + + + +9 + + +model URL + + +vanilla BERT (Devlin et al., 2019) [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased) +BIOBERT (Lee et al., 2020) [https://huggingface.co/dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) +BLUEBERT (Peng et al., 2019) [https://huggingface.co/bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12](https://huggingface.co/bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12) +CLINICALBERT (Alsentzer et al., 2019) [https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) +SCIBERT (Beltagy et al., 2019) [https://huggingface.co/allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) +[UMLSBERT (Michalopoulos et al., 2020) https://www.dropbox.com/s/qaoq5gfen69xdcc/umlsbert.tar.xz?dl=0](https://www.dropbox.com/s/qaoq5gfen69xdcc/umlsbert.tar.xz?dl=0) +PUBMEDBERT (Gu et al., 2020) [https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) + + +Table 5: This table lists the URL of models used in this study. + + +#params +method reduction rate #params #params in BERT + + +ADAPTER13% 1 14.47M 13.22% +ADAPTER1% 16 0.60M 1.09% + + +full-model-tuning - 109.48M 100% + + +Table 7: This table compares number of parameters trained in ADAPTER variants and also full-modeltuning. + + +**B.4** **Hardware Configurations** + + +All our experiments are conducted on a server with +specifications listed in Tab. 8. + + +hardware specification + + +RAM 192 GB +CPU Intel Xeon W-2255 @3.70GHz, 10-core 20-threads +GPU NVIDIA GeForce RTX 2080 Ti (11 GB) _×_ 4 + + +Table 8: Hardware specifications of the used machine. + + +**C** **Other Details** + + +**C.1** **The Full Table of Supervised Baseline** +**Models** + + +The full table of supervised baseline models is provided in Tab. 4. + + +**C.2** **Hyper-Parameters Search Scope** + + +Tab. 
9 lists hyper-parameter search space for obtaining the set of used numbers. Note that the +chosen hyper-parameters yield the overall best performance but might be sub-optimal on any single +dataset. Also, we balanced the memory limit and +model performance. + + +**C.3** **A High-Resolution Version of Fig. 1** + + +We show a clearer version of t-SNE embedding +visualisation in Fig. 3. + + +10 + + diff --git a/alignment-papers-text/2012.07162_Mask-Align_Self-Supervised_Neural_Word_Alignment.md b/alignment-papers-text/2012.07162_Mask-Align_Self-Supervised_Neural_Word_Alignment.md new file mode 100644 index 0000000000000000000000000000000000000000..bcf547a88fa9fdd06f6f88ceba8dfd51645bf0fe --- /dev/null +++ b/alignment-papers-text/2012.07162_Mask-Align_Self-Supervised_Neural_Word_Alignment.md @@ -0,0 +1,1318 @@ +## **MASK-ALIGN: Self-Supervised Neural Word Alignment** + +**Chi Chen** [1] _[,]_ [3] _[,]_ [4] **, Maosong Sun** [1] _[,]_ [3] _[,]_ [4] _[,]_ [5] **, Yang Liu** _[∗]_ [1] _[,]_ [2] _[,]_ [3] _[,]_ [4] _[,]_ [5] + +1Department of Computer Science and Technology, Tsinghua University, Beijing, China +2Institute for AI Industry Research, Tsinghua University, Beijing, China +3Institute for Artificial Intelligence, Tsinghua University, Beijing, China +4Beijing National Research Center for Information Science and Technology +5Beijing Academy of Artificial Intelligence + + + +**Abstract** + + +Word alignment, which aims to align translationally equivalent words between source and +target sentences, plays an important role in +many natural language processing tasks. Current unsupervised neural alignment methods +focus on inducing alignments from neural machine translation models, which does not leverage the full context in the target sequence. In +this paper, we propose MASK-ALIGN, a selfsupervised word alignment model that takes +advantage of the full context on the target side. +Our model parallelly masks out each target token and predicts it conditioned on both source +and the remaining target tokens. This two-step +process is based on the assumption that the +source token contributing most to recovering +the masked target token should be aligned. +We also introduce an attention variant called +_leaky attention_, which alleviates the problem +of high cross-attention weights on specific tokens such as periods. Experiments on four language pairs show that our model outperforms +previous unsupervised neural aligners and obtains new state-of-the-art results. + + +**1** **Introduction** + + +Word alignment is an important task of finding +the correspondence between words in a sentence +pair (Brown et al., 1993) and used to be a key +component of statistical machine translation (SMT) +(Koehn et al., 2003; Dyer et al., 2013). Although +word alignment is no longer explicitly modeled in +neural machine translation (NMT) (Bahdanau et al., +2015; Vaswani et al., 2017), it is often leveraged to +analyze NMT models (Tu et al., 2016; Ding et al., +2017). Word alignment is also used in many other +scenarios such as imposing lexical constraints on +the decoding process (Arthur et al., 2016; Hasler +et al., 2018), improving automatic post-editing (Pal + + +_∗_ Corresponding author + + + +**Tokyo** + + +Induced alignment link: **Tokio - Tokyo** + + +Figure 1: An example of inducing an alignment link for +target token “Tokyo” in MASK-ALIGN. First, we mask +out “Tokyo” and predict it with source and other target +tokens. 
Then, the source token “Tokio” that contributes most to recovering the masked word (highlighted in red) is chosen to be aligned to “Tokyo”.


et al., 2017), and providing guidance for translators in computer-aided translation (Dagan et al., 1993).

Compared with statistical methods, neural methods can learn representations end-to-end from raw data and have been successfully applied to supervised word alignment (Yang et al., 2013; Tamura et al., 2014). For unsupervised word alignment, however, previous neural methods fail to significantly exceed their statistical counterparts such as FAST-ALIGN (Dyer et al., 2013) and GIZA++ (Och and Ney, 2003). Recently, there has been a surge of interest in NMT-based alignment methods, which take alignments as a by-product of NMT systems (Li et al., 2019; Garg et al., 2019; Zenkel et al., 2019, 2020; Chen et al., 2020). Using attention weights or feature importance measures to induce alignments for to-be-predicted target tokens, these methods outperform unsupervised statistical aligners such as GIZA++ on a variety of language pairs.

Although NMT-based unsupervised aligners have proven to be effective, they suffer from two major limitations. First, due to the autoregressive property of NMT systems (Sutskever et al., 2014), they only leverage part of the target context. This inevitably introduces noisy alignments when the prediction is ambiguous. Consider the target sentence in Figure 1. When predicting “Tokyo”, an NMT system may generate “1968” because the future context is not observed, leading to a wrong alignment link (“1968”, “Tokyo”). Second, they have to incorporate an additional guided alignment loss (Chen et al., 2016) to outperform GIZA++. This loss requires pseudo alignments of the full training data to guide the training of the model. Although these pseudo alignments can partially alleviate the problem of ignoring future context, they are computationally expensive to obtain.


Figure 2: The architecture of MASK-ALIGN.


In this paper, we propose a self-supervised model specifically designed for the word alignment task, namely MASK-ALIGN. Our model masks out each target token in parallel and recovers it conditioned on the source and the other target tokens. Figure 1 shows an example where the target token “Tokyo” is masked out and re-predicted. Intuitively, as all source tokens except “Tokio” can find their counterparts on the target side, “Tokio” should be aligned to the masked token. Based on this intuition, we assume that the source token contributing most to recovering a masked target token should be aligned to that target token. Compared with NMT-based methods, MASK-ALIGN is able to take full advantage of the bidirectional context on the target side and is therefore expected to achieve higher alignment quality. We also introduce an attention variant called _leaky attention_ to reduce the high attention weights on specific tokens such as periods. By encouraging agreement between two directional models during both training and inference, our method consistently outperforms the state-of-the-art on four language pairs without using the guided alignment loss.
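To make the core assumption concrete, alignment induction in such a model boils down to an argmax over the cross-attention weights collected while re-predicting each masked target token. The following sketch is illustrative only: the tensor names are our own, and the hard intersection of the two directions is a simplified stand-in for the agreement mechanisms described in Sections 2.2 and 2.3.

```python
import torch

def induce_alignments(attn_tgt2src, attn_src2tgt):
    """Toy alignment induction from the cross-attention of two directional models.

    attn_tgt2src: [I, J] attention over the J source tokens used when
                  re-predicting each of the I masked target tokens.
    attn_src2tgt: [J, I] attention from the reverse-direction model.
    Returns a set of (source_index, target_index) links proposed by both models.
    """
    # The source token that contributes most to recovering target token i
    # is taken as its aligned candidate.
    forward = {(attn_tgt2src[i].argmax().item(), i) for i in range(attn_tgt2src.size(0))}
    backward = {(j, attn_src2tgt[j].argmax().item()) for j in range(attn_src2tgt.size(0))}
    return forward & backward   # simple agreement: keep links both directions propose
```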
**2** **Approach**


Figure 2 shows the architecture of our model. The model predicts each target token conditioned on the source and the other target tokens, and generates alignments from the attention weights between source and target (Section 2.1). Specifically, our approach introduces two attention variants, _static-KV attention_ and _leaky attention_, to efficiently obtain attention weights for word alignment. To better utilize attention weights from two directions, we encourage agreement between two unidirectional models during both training (Section 2.2) and inference (Section 2.3).


**2.1** **Modeling**


Conventional unsupervised neural aligners are based on NMT models (Peter et al., 2017; Garg et al., 2019). Given a source sentence $\mathbf{x} = x_1, \ldots, x_J$ and a target sentence $\mathbf{y} = y_1, \ldots, y_I$, NMT models the probability of the target sentence conditioned on the source sentence:

$$P(\mathbf{y} \mid \mathbf{x}; \boldsymbol{\theta}) = \prod_{i=1}^{I} P(y_i \mid \mathbf{y}_{<i}, \mathbf{x}; \boldsymbol{\theta}),$$

where **y** _