| [ |
| { |
| "chunk_id": "1d284c3e-43fd-49f1-b36c-391c4975539c", |
| "text": "Learn to Combine Modalities in Multimodal Deep Learning\nKuan Liu†,‡∗ Yanen Li† Ning Xu† Prem Natarajan‡\nkuanl@usc.edu yanen.li@snap.com ning.xu@snap.com pnataraj@isi.edu\n† Snap Research, Snap Inc., Venice, CA\n‡ Computer Science Department & Information Sciences Institute, University of Southern California\nCombining complementary information from multiple modalities is intuitively appealing for improving the performance of learning-based approaches. However, it is challenging to fully leverage different modalities due to practical challenges such as varying levels of noise and conflicts between modalities. Existing methods do not adopt a joint approach to capturing synergies between the modalities while simultaneously filtering noise and resolving conflicts on a per-sample basis. In this work we propose a novel deep neural network based technique that multiplicatively combines information from different source modalities. Thus the model training process automatically focuses on information from more reliable modalities while reducing emphasis on the less reliable modalities.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 0, |
| "total_chunks": 49, |
| "char_count": 1109, |
| "word_count": 146, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b07cc4d8-937b-4b61-b916-dd6d688df919", |
| "text": "Furthermore, we propose an extension that multiplicatively combines not only the single-source modalities but also a set of mixed source modalities to better capture cross-modal signal correlations. We demonstrate the effectiveness of our proposed technique by presenting empirical results on three multimodal classification tasks from different domains. The results show consistent accuracy improvements on all three tasks. an object, event, or activity of interest.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 1, |
| "total_chunks": 49, |
| "char_count": 466, |
| "word_count": 63, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "bccd1b8d-c5fa-4641-b3f1-e5d024a18828", |
| "text": "Therefore, learning-based methods that combine information from multiple modalities are, in principle, capable of more robust inference. For example, a person's visual appearance and the type of language he uses both carry information about his age. In the context of user profiling in a social network, it helps to predict users' gender and age by modeling both users' profile pictures and their posts. A natural generalization of this idea is to aggregate signals from all available modalities and build learning models on top of the aggregated information, ideally allowing the learning technique to figure out the relative emphases to be placed on different modalities for a specific task. This idea is ubiquitous in existing multimodal techniques, including early and late fusion [42, 15], hybrid fusion [1], model ensemble [7], and, more recently, joint training methods based on deep neural networks [38, 45, 37]. In these methods, features (or intermediate features) are put together and are jointly modeled to make a decision. We call them additive approaches due to the type of aggregation operation. Intuitively, they are able to gather useful information and make predictions collectively. However, it is practically challenging to learn to combine different modalities. Given multiple input modalities, artifacts such as noise may be a function of the sample as well as the modality; for example, a clear, high-resolution photo may lead to a more confident estimation of age than a lower-quality photo. Also, either signal noise or classifier vulnerabilities may result in decisions that lead to conflicts between modalities. For instance, in the example of user profiling, some users' gender and age can be accurately predicted by a clear profile photo, while others with a noisy or otherwise unhelpful (e.g., cartoon) profile photo may instead have the most relevant information encoded in their social network engagement, such as posts and friend interactions, etc. (∗Majority of the work in this paper was carried out while the author was affiliated with Snap Research.)", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 2, |
| "total_chunks": 49, |
| "char_count": 2082, |
| "word_count": 323, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "522b144c-baa7-4ce7-bc7a-ff4c29807193", |
| "text": "In such a scenario, we\nrefer to the affected modality, in this case the image modality, as a weak modality. We emphasize\nthat this weakness can be sample-dependent, and is thus not easily controlled with some global bias\nparameters. An ideal algorithm should be robust to the noise from those weak modalities and pick\nout the relevant information from the strong modalities on a per sample basis, while at the same\ntime capturing the possible complementariness among modalities. We would like to point out that the existing additive approaches do not fully address the challenges\nmentioned earlier.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 3, |
| "total_chunks": 49, |
| "char_count": 598, |
| "word_count": 97, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2e4b9d41-f967-4afd-8e2b-8cbd0938d8ca", |
| "text": "Their basic assumptions are 1) every modality is always potentially useful and should be aggregated, and 2) the models (e.g., a neural network) on top of aggregated features can be trained well enough to recover the complex function mapping to a desired output. While in theory the second assumption should hold, i.e., the learned models should be able to determine the quality of each modality per sample given a sufficiently large amount of data, such models are, in practice, difficult to train and regularize due to the finiteness of available data. In this work, we propose a new multiplicative multimodal method which explicitly models the fact that on any particular sample not all modalities may be equally useful. The method first makes decisions on each modality independently. Then the multimodal combination is done in a differentiable and multiplicative fashion. This multiplicative combination suppresses the cost associated with the weak modalities and encourages the discovery of truly important patterns from informative modalities. In this way, on a particular sample, inferences from weak modalities get suppressed in the final output. And, perhaps even more importantly, they are not forced to generate a correct prediction (from noise!) during training. This accommodation of weak modalities helps to reduce model overfitting, especially to noise. As a consequence, the method effectively achieves an automatic selection of the more reliable modalities and ignores the less reliable ones. The proposed method is also end-to-end and enables jointly training model components on different modalities. Furthermore, we extend the multiplicative method with the ideas of additive approaches to increase model capacity. The motivation is that certain unknown mixtures of modalities may be more useful than a good single modality. The new method first creates different mixtures of modalities as candidates, each of which makes a decision independently.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 4, |
| "total_chunks": 49, |
| "char_count": 1956, |
| "word_count": 295, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ebd82bad-782f-4f99-8c5b-b8b1eeab9908", |
| "text": "Then the multiplicative combination automatically selects more appropriate candidates. In this way, the selection operates on \"modality mixtures\"\ninstead of just a single modality. This mixture-based approach enables structured discovery of the\npossible correlations and complementariness across modalities and increases the model capacity in\nthe first step. A similar selection process applied in the second step ignores irrelevant and/or redundant modality mixtures. This again helps control model complexity and avoid excessive overfitting. We validate our approach on classification tasks in three datasets from different domains: image\nrecognition, physical process classification, and user profiling. Each task provides more than one\nmodality as input. Our methods consistently outperform the existing, state-of-the-art multimodal\nmethods.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 5, |
| "total_chunks": 49, |
| "char_count": 845, |
| "word_count": 111, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ec3a274d-cb4e-4480-ad17-5734627e7f14", |
| "text": "In summary, the key contributions of this paper are as follows: • The multimodal classification problem is considered with a focus on addressing the challenge of weak modalities. • A novel deep learning combination method that automatically selects strong modalities per sample and ignores weak modalities is proposed and experimentally evaluated. The method works with different neural network architectures and is jointly trained in an end-to-end fashion. • A novel method to automatically select mixtures of modalities is presented and evaluated. This method increases model capacity to capture possible correlations and complementariness across modalities. • Experimental evaluations on three real-world datasets from different domains show that the new methods consistently outperform existing multimodal methods. Figure 1: Illustration of different deep neural network based multimodal methods. (a) A gender prediction example with text (a fake userid) and dense feature (fake profile information) modality inputs. (b) Additive combination methods train neural networks on top of aggregated signals from different modalities; equal errors are back-propagated to the different modality models. (c) Multiplicative combination selects a decision from a more reliable modality; errors back-propagated to the weaker modality are suppressed. (d) Multiplicative modality mixture combination first additively creates mixture candidates and then selects useful modality mixtures with a multiplicative combination procedure. To set the context for our work, we now describe two existing types of popular multimodal approaches in this section. We first introduce notation, then describe traditional approaches, followed by existing deep learning approaches. Notation: We use M to indicate the number of modalities available in total. We denote each input modality/signal as a dense vector vm ∈ Rdm, ∀m = 1, 2, .., M. For example, given M = 3 modalities in the user profiling task, v1 is the profile image represented as a vector, v2 is the posted text representation, and v3 encodes the friend network information. We consider a K-way classification setting where y denotes the labels. pkm denotes the prediction probability of the kth class from the mth modality, and pk denotes the model's final prediction probability of the kth class. Throughout the paper, superscripts are used as indices for classes and subscripts as indices for modalities.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 6, |
| "total_chunks": 49, |
| "char_count": 2423, |
| "word_count": 356, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "38c8032a-b9f0-4499-8848-fce5eb83c331", |
| "text": "2.1 Traditional Approaches\nEarly Fusion: Early fusion methods create a joint representation of input features from multiple modalities. Next, a single model is trained to learn the correlations and interactions between low-level features of each modality. We denote the single model as h. The final prediction can be written as\np = h([v1, .., vM]), (1)\nwhere we use concatenation here as a commonly seen example of jointly representing modality features. Early fusion could be seen as an initial attempt to perform multimodal learning. The training pipeline is simple as only one model is involved.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 7, |
| "total_chunks": 49, |
| "char_count": 593, |
| "word_count": 95, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "1c6764ab-fa10-4bfe-9877-9233f4fc7260", |
| "text": "It usually requires the features from different modalities to be highly engineered and preprocessed so that they align well or are similar in their semantics. Furthermore, it uses one single model to make predictions, which assumes that the model is well suited for all the modalities.\nLate Fusion: Late fusion uses unimodal decision values and fuses them with a fusion mechanism F (such as averaging [41], voting [35], or a learned model [11, 40]). Suppose model hi is used on modality i (i = 1, .., M); the final prediction is\np = F(h1(v1), ..., hM(vM)). (2)", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 8, |
| "total_chunks": 49, |
| "char_count": 558, |
| "word_count": 97, |
| "chunking_strategy": "semantic" |
| }, |
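Late fusion with F as averaging (Eq. (2)) can be sketched in a few lines of NumPy; the array shapes and toy probabilities here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def late_fusion_average(unimodal_probs):
    """Eq. (2) with F = averaging: fuse the per-modality prediction
    distributions h_i(v_i) into a single final distribution p."""
    stacked = np.stack(unimodal_probs)   # shape (M, K)
    return stacked.mean(axis=0)          # shape (K,)

# Two modalities, K = 3 classes (toy numbers)
p = late_fusion_average([np.array([0.7, 0.2, 0.1]),
                         np.array([0.5, 0.3, 0.2])])
# p is a valid distribution: [0.6, 0.25, 0.15]
```

Swapping the mean for a vote or a small learned model recovers the other fusion mechanisms F mentioned above.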
| { |
| "chunk_id": "3e3c9875-d80f-460f-9734-a33fb7647638", |
| "text": "Late fusion allows the use of different models on different modalities, thus allowing more flexibility. It is easier to handle a missing modality as the predictions are made separately. However, because late fusion operates on inferences and not the raw inputs, it is not effective at modeling signal-level interactions between modalities.\n2.2 Multimodal Deep Learning\nDue to their superior performance and computationally tractable representation capability (in vector spaces) in multiple domains such as vision, audio, and text, deep neural networks have gained tremendous popularity in multimodal learning tasks [38, 39, 44]. Typically, domain-specific neural networks are used on different modalities to generate their representations, and the individual representations are merged or aggregated. Finally, the prediction is made on top of the aggregated representation, usually with another neural network to capture the interactions between modalities and learn the complex function mapping between input and output. Addition (or average) and concatenation are two common aggregation methods, i.e.,", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 9, |
| "total_chunks": 49, |
| "char_count": 1093, |
| "word_count": 154, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5f991bf9-90fe-4938-ae14-d3a1219bcd12", |
| "text": "u = ∑m fm(vm) (3)\nu = [f1(v1), .., fM(vM)] (4)\nwhere fm is a domain-specific neural network and fm : Rdm → Rd (m = 1, .., M). Given the combined vector output u ∈ Rd (or R∑dm in the concatenation case), another network g computes the final output:\np = g(u), where g : Rd → RK. (5)", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 10, |
| "total_chunks": 49, |
| "char_count": 256, |
| "word_count": 54, |
| "chunking_strategy": "semantic" |
| }, |
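A minimal NumPy sketch of Eqs. (3)–(5), using single linear-plus-tanh layers as stand-ins for the domain-specific networks fm and a softmax layer for g; all dimensions and weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(v, W):
    # Stand-in for a domain-specific network f_m : R^{d_m} -> R^d
    return np.tanh(W @ v)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

d, K = 4, 3
dims = (5, 7)                                   # d_1, d_2
vs = [rng.normal(size=dm) for dm in dims]       # modality inputs v_m
Ws = [rng.normal(size=(d, dm)) for dm in dims]

# Eq. (3): additive aggregation u = sum_m f_m(v_m), u in R^d
u_add = sum(f(v, W) for v, W in zip(vs, Ws))

# Eq. (4): concatenation u = [f_1(v_1), f_2(v_2)], u in R^{sum of d}
u_cat = np.concatenate([f(v, W) for v, W in zip(vs, Ws)])

# Eq. (5): final prediction p = g(u) with g : R^d -> R^K
Wg = rng.normal(size=(K, d))
p = softmax(Wg @ u_add)
```

Both aggregations produce a single vector on which g operates, which is why the paper groups them together as additive combinations.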
| { |
| "chunk_id": "46aa45bc-a3f6-45cb-b5cc-75c9c6fd3040", |
| "text": "The network structure is illustrated in Figure 1(b). The arrows are function mappings or computing operations. The dotted boxes are representations of single and combined modality features. We call these additive combinations because their critical step is to add modality hidden vectors (although often in a nonlinear way). In Section 5, we present related work in areas such as learning joint multimodal representations using a shared semantic space. Those approaches are not directly applicable to our task, where we aim to predict latent attributes, not merely the observed identities of the sample.\n3 A multiplicative combination layer\nThe additive approaches discussed above make no assumptions regarding the reliability of different modality inputs.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 11, |
| "total_chunks": 49, |
| "char_count": 759, |
| "word_count": 114, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "66384baa-bc33-4ec2-b772-ebb158ff9791", |
| "text": "As such, their performance critically relies on the single network g to figure out\nthe relative emphases to be placed on different modalities. From a modeling perspective, the aim\nis to recover the function mapping between the combined representation u and the desired outputs. This function can be complex in real scenarios. For instance, when the signals are similar or complementary to each other, g is supposed to merge them to make a strengthened decision; when signals\nconflict with each other, g should filter out the unreliable ones and make a decision based primarily\non more reliable modalities. While in theory g—often parameterized as a deep neural network—has\nthe capability to recover an arbitrary function given a sufficiently large amount of data (essentially,\nunlimited), it can be, in practice, very difficult to train and regularize given data constraints in real\napplications. As a result, model performance degrades significantly. Our aim is to design a more (statistically) efficient method by explicitly assuming that some modalities are not as informative as others on a particular sample.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 12, |
| "total_chunks": 49, |
| "char_count": 1113, |
| "word_count": 174, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ba9421cb-307b-4618-8a76-68fcd03ff1cd", |
| "text": "As a result, they should not be fed into a single network for training. Intuitively, it is easier to train a model on the input of a good modality rather than a mix of good ones and bad ones. Here we differentiate modalities into informative modalities (good) and weak modalities (bad). Note that the labels informative and weak are applied with respect to each particular sample. To begin, let every modality make its own independent decision with its modality-specific model (e.g., pi = gi(vi)). Their decisions are combined by taking an average. Specifically, we have the following objective function,", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 13, |
| "total_chunks": 49, |
| "char_count": 601, |
| "word_count": 100, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d61113bb-2623-437c-95b0-fa8ba40f9fb6", |
| "text": "Lce = ℓyce, ℓyce = −∑i=1..M log pyi, (6)\nwhere y denotes the true class index, and we call ℓy a class loss as it is part of the loss function associated with a particular class. In the testing stage, the model predicts the class with the smallest class loss, i.e.,\nŷ = arg miny ℓyce. (7)", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 14, |
| "total_chunks": 49, |
| "char_count": 283, |
| "word_count": 58, |
| "chunking_strategy": "semantic" |
| }, |
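The per-modality cross-entropy objective and argmin prediction of Eqs. (6)–(7) can be sketched as follows; the two-modality probability table is a made-up example:

```python
import numpy as np

def class_losses(probs):
    """Eq. (6): ell^y = -sum_{i=1..M} log p_i^y for every class y.
    probs: array of shape (M, K) with per-modality class probabilities."""
    return -np.log(probs).sum(axis=0)       # shape (K,)

def predict(ell):
    """Eq. (7): predict the class with the smallest class loss."""
    return int(np.argmin(ell))

probs = np.array([[0.8, 0.1, 0.1],   # modality 1: confident on class 0
                  [0.4, 0.3, 0.3]])  # modality 2: weak
ell = class_losses(probs)
y_hat = predict(ell)                 # -> 0
```

Note that the weak modality still contributes its full -log p term here, which is exactly the overfitting pressure the multiplicative combination below is designed to relieve.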
| { |
| "chunk_id": "891a14bb-1fb4-4910-bb47-017268f88380", |
| "text": "This relatively standard approach allows us to train one model per modality. However, when weak modalities exist, the objective (6) increases significantly. Minimizing (6) forces the model for every modality to perform well on the training data. This can lead to severe overfitting, as a noisy modality simply does not contain the information required to make a correct prediction, yet the loss function penalizes it heavily for incorrect predictions.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 15, |
| "total_chunks": 49, |
| "char_count": 471, |
| "word_count": 73, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "1587047a-39f6-450f-8dea-91f89db616ae", |
| "text": "3.1 Combine in a multiplicative way\nTo mitigate the problem of overfitting, we propose a mechanism to suppress the penalty incurred on noisy signals from certain modalities. The cost on a modality is down-weighted when there exist other good modalities for the sample. Specifically, a modality is good (or bad) when it assigns a high (or low) probability to the correct class. A higher probability indicates more informative signals and stronger confidence. With that in mind, we design a down-weighting factor as follows,", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 16, |
| "total_chunks": 49, |
| "char_count": 530, |
| "word_count": 84, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "6fa7fa81-5027-433c-a632-dda5799dcc86", |
| "text": "qi = [∏j≠i (1 − pj)]β/(M−1), (8)\nwhere we omit the class index superscripts on p and q for brevity; β is a hyperparameter that controls the strength of the down-weighting and is chosen by cross-validation. Then the new training criterion becomes\nLmul = ℓymul, ℓymul = −∑i=1..M qyi log pyi. (9)\nThe scaling factor [∏j≠i (1 − pj)]β/(M−1) represents the average prediction quality of the remaining modalities. This term is close to 0 when some pj are close to 1. When those modalities (j ≠ i) have confident predictions on the correct class, the term has a small value, thus suppressing the cost on the current modality (pi). Intuitively, when other modalities are already good, the current modality does not have to be equally good. This down-weighting reduces the training requirement on all modalities and reduces overfitting. [24] uses this term to ensemble different layers of a convolutional network in an image recognition task.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 17, |
| "total_chunks": 49, |
| "char_count": 925, |
| "word_count": 154, |
| "chunking_strategy": "semantic" |
| }, |
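The down-weighting of Eqs. (8)–(9) can be sketched directly; the probabilities below are made up to show one strong and one conflicting modality, and p is assumed to lie strictly inside (0, 1) so the logs and the division are well defined:

```python
import numpy as np

def multiplicative_class_loss(probs, y, beta=1.0):
    """Eqs. (8)-(9): ell^y_mul = -sum_i q_i^y log p_i^y with
    q_i = [prod_{j != i} (1 - p_j)]^{beta/(M-1)} on the true class y.
    probs: (M, K) per-modality probabilities, assumed in (0, 1)."""
    M = probs.shape[0]
    p_y = probs[:, y]
    one_minus = 1.0 - p_y
    # product over j != i, computed as total product / own factor
    q = (one_minus.prod() / one_minus) ** (beta / (M - 1))
    return -(q * np.log(p_y)).sum()

probs = np.array([[0.9, 0.1],    # strong modality, correct on class 0
                  [0.2, 0.8]])   # weak / conflicting modality
loss = multiplicative_class_loss(probs, y=0, beta=1.0)
# The weak modality's cost (-log 0.2) is scaled by q = 0.1,
# so it is no longer forced to be correct on this sample.
```

With beta = 0 every q becomes 1 and the loss reduces to the averaged objective of Eq. (6), matching the trade-off described in the text.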
| { |
| "chunk_id": "710edd09-161d-4133-8564-91abda2b5280", |
| "text": "We introduce the hyperparameter β to control the strength of these factors: larger values give a stronger suppressing effect, and vice versa. During testing, we follow a criterion similar to (7) (replacing ℓce with ℓmul). We call this strategy a multiplicative combination due to the use of multiplicative operations in (8).", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 18, |
| "total_chunks": 49, |
| "char_count": 334, |
| "word_count": 52, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e19b2ed5-6584-45a1-95ac-3f20d315d0fe", |
| "text": "During training, the process always tries to select some modalities that give the correct prediction and tolerates mistakes made by other modalities. This tolerance encourages each modality to work best in its own areas instead of on all examples. We emphasize that β implements a trade-off between ensembling and non-smoothed multiplicative combination. When β = 0, we have q = 1.0 and predictions from different modalities are averaged; when β = 1, there is no smoothing at all on the (1 − pj) terms, so a good modality will strongly down-weight losses from other modalities.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 19, |
| "total_chunks": 49, |
| "char_count": 573, |
| "word_count": 93, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c619cfe9-0538-4f62-bbfe-d7f1dc74546f", |
| "text": "The proposed combination can be implemented as the last layer of a combination neural network, as it is differentiable. Errors in (9) can be back-propagated to the different components of the model, so the model can be trained jointly.\n3.2 Boosted multiplicative training\nAlthough it provides a mechanism to selectively combine good and bad modalities, the multiplicative layer as configured above has some limitations. Specifically, there is a mismatch between minimizing the objective function and maximizing the desired accuracy. To illustrate this, we take one step back and look at the standard cross-entropy objective function (6) (with M = 1). We have exp(−ℓ1) + exp(−ℓ2) = p1 + p2 = 1 when K = 2. Let's call this property normalized.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 20, |
| "total_chunks": 49, |
| "char_count": 729, |
| "word_count": 118, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "67a8de6b-79d5-4bbe-87a3-557560b9ca06", |
| "text": "It makes intuitive sense to minimize only ℓy in the training phase so that we have ℓy < ℓy′ (y′ ≠ y), thus maximizing the accuracy. However, if we look at (9), the same normalization no longer applies due to the complication of multiple modalities (M > 1) and the introduction of the down-weighting factors qi. Therefore, it is not guaranteed that minimizing ℓ1 drives ℓ1 < ℓ2, or vice versa. There are two important consequences of this mismatch. First, the method may stop minimizing the class loss on the correct class while the prediction is still incorrect. Second, it may work on reducing class losses that already yield correct predictions. A tempting naive approach: When addressing this issue, a tempting approach is to normalize the class losses as one would normalize a probability vector. A deeper consideration reveals the pitfall inherent in that temptation: normalizing class losses does not make sense because the class losses in an objective function are error surrogates, which usually serve as upper bounds of the training errors. While it makes sense to minimize the surrogates on the correct classes, it is pointless, perhaps counterproductive, to maximize the losses on the wrong classes. What regular normalization techniques do is maximize the gap between losses on the correct and wrong classes, effectively minimizing the correct ones and maximizing the wrong ones. Experimental results validate the analysis presented above. Boosting extension: We propose a modification of the objective function in (9) to address the issue. Rather than always placing a loss on the correct class, we place a penalty only when the class loss value is not the smallest among all the classes.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 21, |
| "total_chunks": 49, |
| "char_count": 1715, |
| "word_count": 277, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fd217906-e6cf-4a96-829e-4504c78ec2f6", |
| "text": "This creates a connection to the prediction\nmechanism in (7). If the prediction is correct, there is no need to further reduce the class loss on\nthat instance; if the prediction is wrong, the class loss should be reduced even if the loss value is\nalready relatively small. To increase the robustness, we add a margin formulation where the loss on\nthe correct class should be smaller by a margin. Thus, the objective we use is as follows,", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 22, |
| "total_chunks": 49, |
| "char_count": 437, |
| "word_count": 78, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "801acbba-c1e6-473e-9ae0-a24eb26ed331", |
| "text": "L = ℓy (1 − 1(ℓy^mul + δ < ℓy′^mul, ∀ y′ ̸= y)), (10)\nwhere the indicator on the right-hand side of (10) computes whether the loss associated with the correct class is the smallest of all classes (by a margin). The margin δ is chosen by cross validation in the experiments.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 23, |
| "total_chunks": 49, |
| "char_count": 249, |
| "word_count": 46, |
| "chunking_strategy": "semantic" |
| }, |
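The boosted objective (10) can be sketched as follows; this is a minimal NumPy illustration assuming the per-class multiplicative losses ℓ^mul have already been computed per example (function and variable names are ours, not the authors').

```python
import numpy as np

def boosted_loss(class_losses, y, delta=0.1):
    """Sketch of the boosted objective (10): the loss on the correct
    class counts only when it is NOT the smallest among all classes
    by a margin delta, i.e. the example is still 'hard'."""
    class_losses = np.asarray(class_losses, dtype=float)
    idx = np.arange(class_losses.shape[0])
    correct = class_losses[idx, y]            # ell_y^mul per example
    wrong = class_losses.copy()
    wrong[idx, y] = np.inf                    # exclude the correct class
    smallest_wrong = wrong.min(axis=1)
    easy = correct + delta < smallest_wrong   # indicator in (10)
    return float(np.mean(np.where(easy, 0.0, correct)))
```

On a batch where an example is already classified correctly by the margin, that example contributes zero, so gradients concentrate on the remaining hard examples.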
| { |
| "chunk_id": "718bb620-29ff-4b33-a887-b081bbae3a37", |
| "text": "The new objective function aims to minimize only the class losses that still need improvement. For examples that are already classified correctly, the loss counts as zero. The objective therefore adjusts only the losses that lead to wrong predictions, which better aligns model training with the desired prediction accuracy.\nBoosting connection There is a clear connection between the new objective (10) and boosting ideas if we treat the examples on which (7) makes wrong predictions as hard examples and the rest as easy examples. The objective (10) looks only at the hard examples and directs effort toward improving their losses. The set of hard examples changes during training, and the algorithm adapts accordingly. We therefore call the new training criterion boosted multiplicative training.\n4 Selecting modality mixtures\nThe multiplicative combination layer explicitly assumes modalities are noisy and automatically selects good ones. One limitation is that the models gi (i = 1, .., M) are trained primarily on a single modality (although they do receive back-propagated errors from the other modalities through joint training). This prevents the method from fully capturing synergies across modalities.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 24, |
| "total_chunks": 49, |
| "char_count": 1226, |
| "word_count": 185, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4a687af3-a405-43a1-bcef-c98b5a553565", |
| "text": "In a Twitter example, a user's follower network and followee network are two modalities that are different but closely related. They jointly contribute to predictions concerning the user's interests, etc. The multiplicative combination in Section 3 is not ideal for capturing such correlations. Additive methods, on the other hand, capture model correlation more easily by design (although they do not explicitly handle modality noise and conflicts).\n4.1 Modality mixture candidates\nGiven the complementary qualities of the additive and multiplicative approaches, it is desirable to harness the advantages of both. To achieve that goal, we propose a new method. At a high level, we want the method first to capture all possible interactions between different modalities and then to filter out noise and pick useful signals. To model interactions of different modalities, we first create different mixtures of modalities. In particular, we enumerate all possible mixtures from the power set of the set of modality features. On each mixture Mc ⊆ {1, 2, .., M} containing one or more modalities, we apply the additive operation over the modalities k ∈ Mc to extract a higher-level feature representation uc (Eq. (11)). Thus uc is the representation of the mixture of modalities in the set Mc, gathering signals from all the modalities in Mc. Since there are 2^M − 1 different non-empty Mc, there are 2^M − 1 representations uc, each looking into a different modality mixture.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 25, |
| "total_chunks": 49, |
| "char_count": 1510, |
| "word_count": 239, |
| "chunking_strategy": "semantic" |
| }, |
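The candidate enumeration described above is just the non-empty power set of the modality index set; a minimal sketch (names are illustrative):

```python
from itertools import combinations

def mixture_candidates(M):
    """Enumerate all non-empty subsets M_c of the modality indices
    {1, 2, .., M}; there are 2**M - 1 of them, one per candidate u_c."""
    indices = range(1, M + 1)
    for r in range(1, M + 1):
        for subset in combinations(indices, r):
            yield set(subset)
```

For M = 3 this yields 7 candidates: {1}, {2}, {3}, {1,2}, {1,3}, {2,3}, {1,2,3}; each would then be fed through its own additive fusion network to produce a prediction pc as in (12).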
| { |
| "chunk_id": "777f0314-4c12-48f1-aa88-d6a6c6870402", |
| "text": "We call each uc a mixture candidate, since we believe not every mixture is equally useful; some mixtures may be very helpful to model training while others could even be harmful. Given the generated candidates, we make a prediction from each of them independently. Concretely, as in the additive approach, a neural network gc is used to make a prediction pc as follows,\npc = gc(uc), (12)\nwhere pc is the prediction from an individual mixture. Different pc may disagree with each other, so a mechanism is needed to select which one to believe, or how to combine them.\n4.2 Mixture selections\nAmong the combination candidates generated above, it is not clear which mixtures are strong and which are weak, because the proposals are produced by enumeration. One simple option is to average the predictions from all candidates. However, this loses the ability to discriminate between modalities and again treats them as equally useful. From a modeling perspective, it is similar to simply applying the additive approach to the modalities in the first place.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 26, |
| "total_chunks": 49, |
| "char_count": 1044, |
| "word_count": 173, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "0e58c67f-57da-4ea6-bb1e-ddad89ad1b54", |
| "text": "Our goal is to automatically select strong candidates and ignore weak ones. To achieve that, we apply the multiplicative combination layer (9) in Section 3 to the selection of the mixture candidates in (12), i.e.,\nℓy = − Σ_{c=1}^{|Mc|} q_c^y log p_c^y, (13)\nwhere qc is defined analogously. Equation (13) follows (9) except that each model here is based on a mixture candidate instead of a single modality. With (10), (11), (12), and (13), our method pipeline is illustrated in Fig. 1(d). It first additively creates modality mixture candidates. A candidate can consist of features from a single modality or of mixed features from multiple modalities. By design, these candidates make it more straightforward to account for signal correlation and complementarity across modalities. However, it is unknown which candidate is good for a given example; some candidates can be redundant and noisy. The method then combines the predictions of the different mixtures multiplicatively. The multiplicative layer enables automatic candidate selection: strong candidates are picked while weak ones are ignored, without dramatically increasing the overall objective. As a whole, the model is able to pick the most useful modalities and modality mixtures for the prediction task.\nTable 1: Datasets and modalities.\nCIFAR100: features output from 3 ResNet units\nHIGGS: low-level, high-level features\ngender: first name, userid, engagement", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 27, |
| "total_chunks": 49, |
| "char_count": 1451, |
| "word_count": 223, |
| "chunking_strategy": "semantic" |
| }, |
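The multiplicative selection in (13) can be sketched as below. The exact form of the down-weighting factor q_c is given by Eq. (9) of the paper, which is not reproduced in this excerpt; the form used here, q_c = [Π_{c′≠c}(1 − p_c′)]^β, is our assumed stand-in and should be checked against the paper.

```python
import numpy as np

def multiplicative_selection_loss(p_correct, beta=0.5):
    """Down-weighted multiplicative combination over mixture candidates.

    p_correct: array of shape (C,), candidate c's predicted probability
    for the correct class y. Candidate c's log loss is scaled by q_c,
    which shrinks when the OTHER candidates are already confident, so
    training focuses on whichever candidates are reliable per example.
    NOTE: the exact q_c follows Eq. (9) of the paper; the form below is
    an assumption made for illustration.
    """
    p = np.asarray(p_correct, dtype=float)
    C = len(p)
    q = np.array([np.prod(1.0 - np.delete(p, c)) ** beta for c in range(C)])
    return float(-np.sum(q * np.log(p)))
```

With one confident candidate (p close to 1), the remaining candidates' loss terms are multiplied by a small q and effectively ignored, which is the automatic selection behavior described above.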
| { |
| "chunk_id": "bc58de2d-8256-4b6b-9a2f-1273246dfede", |
| "text": "Multimodal learning Traditional multimodal learning methods include early fusion (i.e., feature based), late fusion (i.e., decision based), and hybrid fusion [1]. They also include model-based fusion such as multiple kernel learning [13, 4, 9] and graphical-model-based approaches [36, 10, 16]. Deep neural networks are very actively explored for multimodal fusion [38]. They have been used to fuse information for audio-visual emotion classification [45], gesture recognition [37], affect analysis [25], and video description generation [23]. While the modalities used, the architectures, and the optimization techniques may differ, the general idea of fusing information in a joint hidden layer of a neural network remains the same.\nMultiplicative combination technique Multiplicative combination is widely explored in machine learning. [6] uses an OR graphical model to combine similarity probabilities across different feature components: the probabilities of dissimilarity between pairs of objects are multiplied to generate the final probability of being dissimilar, thus picking out the most optimistic component. [24] ensembles multiple layers of a convolutional network with a down-weighting objective function that is a specialized instance of our (8).", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 28, |
| "total_chunks": 49, |
| "char_count": 1269, |
| "word_count": 176, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3b81836d-b7c9-4c46-aa5d-a570e96f4712", |
| "text": "Our objective is more general and flexible. Furthermore, we develop a boosted training strategy and modality mixture combination to address multimodal classification challenges. [30] develops the focal loss to address the class imbalance issue; in its single-modality setting, every class loss is down-weighted by one minus its own probability.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 29, |
| "total_chunks": 49, |
| "char_count": 327, |
| "word_count": 46, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4c325229-489d-4bbc-93e2-17fbcf82e02a", |
| "text": "Attention techniques [2, 5, 33] can also be viewed as multiplicative methods for combining multiple modalities: features from different modalities are dynamically weighted before being mixed together. There, the multiplicative operation is performed at the feature level instead of the decision level.\nOther multimodal tasks There are other multimodal tasks where the ultimate goal is not classification, including various image captioning tasks. In [43] a CNN image representation is decoded using an LSTM language model. In [22], gLSTM incorporates the image data together with sentence decoding at every time step, fusing visual and sentence data in a joint representation. Joint multimodal representation learning is also used for visual and media question answering [8, 32, 46], visual integrity assessment [21], and personalized recommendation [19, 31].\nWe validate our methods on three datasets from different domains: image recognition, physical process classification, and user profiling. On these tasks, we are given inputs from more than one modality and try to use them to achieve the best generalization performance. Our code is publicly available2.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 30, |
| "total_chunks": 49, |
| "char_count": 1157, |
| "word_count": 168, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3656ac5e-09b3-46b7-bd98-b92e6075bd52", |
| "text": "6.1.1 CIFAR-100 image recognition\nThe CIFAR-100 dataset [28] contains 50,000 training and 10,000 test color images of size 32 × 32 from 100 classes. As observed by [47, 17], different layers of a convolutional neural network (CNN) contain different signals of an image (at different abstraction levels) that may be useful for classification on different examples. [24] takes three layers of networks in networks (NINs) [29] and demonstrates recognition accuracy improvements. In our experiments, the features from three different layers of a CNN are regarded as three different modalities.\n2https://github.com/skywaLKer518/MultiplicativeMultimodal", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 31, |
| "total_chunks": 49, |
| "char_count": 683, |
| "word_count": 94, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "18658eee-302c-463d-949a-fc0ca47c59fc", |
| "text": "Network architecture We use Resnet [18] as the network in our experiments, as it significantly outperforms NINs on this task; Resnet's block structure also makes our choice of modalities easier and more natural. We experimented with the Resnet-32 and Resnet-110 architectures. Both networks have three residual units, and we take the hidden states of the three units as the modalities. We follow [24] and weight the losses of the different layers by (0.3, 0.3, 1.0). Our implementations are based on [12].\nMethods We experimented with the following methods: (1) Vanilla Resnet (\"Base\") [12] predicts the image class based only on the last layer's output; i.e., there is only one modality. (2) Resnet-Add (\"Add\") concatenates the hidden nodes of the three layers and builds fully connected neural networks (FCNNs) on top of the concatenated features. We tuned the network structure and found that a two-layer network with 256 hidden nodes gives the best result. (3) Resnet-Mul (\"Mul\") multiplicatively combines predictions from the hidden nodes of the three layers. (4) Resnet-MulMix (\"MulMix\") uses multiplicative modality mixture combination on the three hidden layers, with the default β value of 0.5. (5) Resnet-MulMix* (\"MulMix*\") is the same as MulMix except that β is tuned between 0 and 1.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 32, |
| "total_chunks": 49, |
| "char_count": 1285, |
| "word_count": 203, |
| "chunking_strategy": "semantic" |
| }, |
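The per-layer loss weighting (0.3, 0.3, 1.0) described above can be illustrated with a trivial sketch (the function name is ours):

```python
def weighted_layer_loss(layer_losses, weights=(0.3, 0.3, 1.0)):
    """Combine the per-unit (per-modality) training losses with the
    fixed weights (0.3, 0.3, 1.0) from [24]; the last residual unit,
    whose features are most abstract, counts the most."""
    return sum(w * l for w, l in zip(weights, layer_losses))
```

The design choice is that earlier residual units act as auxiliary modalities, so their losses regularize training without dominating the final-layer objective.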
| { |
| "chunk_id": "69a84efc-77a9-4e6a-84e5-4fe291665496", |
| "text": "Training details We strictly follow [12] to train Resnet and all other models. Specifically, we use SGD with momentum and a fixed learning-rate schedule {0.1, 0.01, 0.001}, terminating at 80,000 iterations. We use a batch size of 100 and a weight decay of 0.0002 on all network weights.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 33, |
| "total_chunks": 49, |
| "char_count": 282, |
| "word_count": 47, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "223160ad-c774-44ea-b740-6413762ee916", |
| "text": "6.1.2 HIGGS classification HIGGS [3] is a binary classification problem to distinguish between a signal process which produces\nHiggs bosons and a background process which does not. The data has been produced using Monte\nCarlo simulations.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 34, |
| "total_chunks": 49, |
| "char_count": 238, |
| "word_count": 36, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4c4c874a-3e18-4da0-a626-1b2020242600", |
| "text": "We have two feature modalities: low-level and high-level features. The low-level features are 21 kinematic properties measured by the particle detectors in the accelerator. The high-level features are another 7 features that are functions of the first 21; they were derived by physicists to help discriminate between the two classes. Details of the feature names are in the original paper [3]. We follow the setup in [3] and use the last 500,000 examples as a test set. We use two versions of the data, \"HIGGS-small\" and \"HIGGS-full\". To investigate algorithm behavior under different data scales, we randomly down-sample 1/3 of the examples from the full training split; this creates the subset we call \"HIGGS-small.\"\nNetwork architecture We use feed-forward deep networks on each modality. Following [3], we use 300 hidden nodes in our networks and tried different numbers of layers. L2 weight decay is used with coefficient 0.00002. Dropout is not used, as it hurts performance in our experiments. Network weights are randomly initialized, and SGD with 0.9 momentum is used during training.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 35, |
| "total_chunks": 49, |
| "char_count": 1100, |
| "word_count": 176, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "31b7412f-59f7-4be0-8a8e-15615cf0c9d3", |
| "text": "Methods We experimented with single-modality prediction, late fusion of the two modalities, and modality combination methods similar to those described above for CIFAR100.\n6.1.3 Gender classification\nGender The dataset we use contains 7.5 million users from the Snapchat app, with registered userids and user activity logs (e.g., story posts, friend networks, messages sent, etc.). It also contains users' first names inferred by an internal tool. The task is to predict user gender, with users' inputs in the Bitmoji app used as ground truth. We randomly shuffle the data and use 6 million samples for training, 500K for development, and 1 million for testing.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 36, |
| "total_chunks": 49, |
| "char_count": 659, |
| "word_count": 105, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "72ce87e0-0728-49e4-92b3-a6cf7582575b", |
| "text": "There are three modalities in this dataset: the userid as a short text, the inferred user first name as a letter string, and dense features extracted from user activities (e.g., the count of messages sent, the number of male or female friends, the count of stories posted).\nGender-6 and Gender-22 We experimented with two versions of the dataset, which differ in the richness of the user activity features: the first has 6 highly engineered features (gender-6) and the other has 22 features (gender-22).\nNetwork architecture We use FCNNs to model the dense features; after tuning, we use a two-layer network with 2000 hidden nodes. We use character-based Long Short-Term Memory networks (LSTMs) [20] to model the text strings. A text string is fed into the network character by character, and the hidden representation of the last character is connected to FCNNs to predict the gender. We find that a vanilla single-layer LSTM outperforms or matches other\nTable 2: Test error rates/AUC comparisons on CIFAR100, HIGGS, and gender tasks.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 37, |
| "total_chunks": 49, |
| "char_count": 1059, |
| "word_count": 172, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fdd00a72-f68f-46cf-b497-34b94e6b28b0", |
| "text": "MulMix uses the default β value 0.5. MulMix* tunes β between 0 and 1. Experimental results are from 5 random runs. The best and second best results in each row are in bold and italic, respectively.\nColumns: Base | Fuse | Add [38] | Mul | MulMix | MulMix*\ncifar100, resnet-32, Err: 30.3 ± 0.2 (30.0 [12]) | - | 29.4 ± 0.4 | 29.3 ± 0.2 | 27.8 ± 0.3 | 27.3 ± 0.4\ncifar100, resnet-110, Err: 26.5 ± 0.3 (26.4 [12]) | - | 27.2 ± 0.4 | 25.3 ± 0.4 | 25.1 ± 0.2 | 24.7 ± 0.3\nhiggs-small, Err: 23.3 | 22.5 | 22.3 ± 0.1 | 21.8 ± 0.1 | 21.4 ± 0.1 | 21.2 ± 0.1\nhiggs-small, AUC: 84.8 | 85.9 | 86.2 ± 0.1 | 86.5 ± 0.1 | 87.1 ± 0.1 | 87.2 ± 0.1\nhiggs-full, Err: 21.7 | 20.6 | 20.0 ± 0.1 | 20.1 ± 0.1 | 19.6 ± 0.2 | 19.4 ± 0.1\nhiggs-full, AUC: 86.6 | 88.0 | 88.3 ± 0.1 | 88.8 ± 0.2 | 89.1 ± 0.1 | 88.6 ± 0.1 (cf. 88.5 [3])\ngender-6, Err: 15.4 | 7.97 | 6.07 ± 0.02 | 6.05 ± 0.02 | 5.90 ± 0.02 | 5.86 ± 0.02\ngender-22, Err: 10.1 | 5.15 | 3.85 ± 0.03 | 3.83 ± 0.03 | 3.70 ± 0.02 | 3.66 ± 0.01\nvariants, including multi-layer, bidirectional [14], and attention-based LSTMs. We believe this is because we have a sufficiently large amount of data.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 38, |
| "total_chunks": 49, |
| "char_count": 978, |
| "word_count": 203, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f0728fec-06ea-4a31-905a-13261825aead", |
| "text": "We also experimented with character-based Convolutional Neural Networks (char-CNNs) [26, 48] and CNNs+LSTMs for text modeling and found that LSTMs perform slightly better.\nTraining details Our tuned LSTM has one layer with hidden size 512. It is trained with ADAM [27] with learning rate 0.0001 and learning rate decay 0.99. Gradients are clipped at 5.0. We stop model training when there is no improvement on the development set for 15 consecutive evaluations.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 39, |
| "total_chunks": 49, |
| "char_count": 450, |
| "word_count": 71, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "28206713-a880-4a9c-8cc8-194aafd0041b", |
| "text": "Methods In addition to the methods described for CIFAR100 and HIGGS, we also experimented with an attention-based combination method [34].\n6.2.1 Accuracy comparisons\nThe test error and AUC comparisons are reported in Table 2.\nCIFAR100 Compared to the vanilla Resnet model (Base), additive modality combination (Add) does not necessarily improve the test error: it helps Resnet-32 but not Resnet-110, possibly due to overfitting on Resnet-110, which already has many more parameters. Multiplicative training (Mul), on the contrary, reduces error rates for both models, demonstrating a better capability of extracting signals from the different modalities. Further, MulMix and MulMix*, which are designed to combine the strengths of additive and multiplicative combination, give a significant boost in accuracy on both models.\nHIGGS Both the fusion model and additive combination give significant error rate reductions compared to a single modality. This is expected, as it is very intuitive to aggregate the low- and high-level feature modalities. Compared to Add, multiplicative combination has clearly better results on higgs-small but slightly worse results on higgs-full. This can be explained by the fact that models are more prone to overfitting on smaller datasets, and multiplicative training reduces that overfitting. Finally, MulMix and MulMix* give a significant boost on both the small and full datasets.\nGender Combining multiple modalities gives the most dramatic improvements here, due to the high level of noise in each modality.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 40, |
| "total_chunks": 49, |
| "char_count": 1530, |
| "word_count": 228, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "259434a1-c7cc-47df-9317-5a64c720a62c", |
| "text": "Figure 2: Comparisons to results from deeper networks on (a) CIFAR100 (resnet110), (b) HIGGS (full), and (c) gender (22). Error rates and standard deviations from fusion networks with varying numbers of layers and hidden sizes (h = 128, 256, 300, 500) are reported and compared to our models (i.e., MulMix and MulMix*). Simply going deeper does not necessarily improve generalization. Experimental results are from 5 random runs.\nTable 3: Error rate results of boosted training (MulMix) on HIGGS-full and gender-22.\nColumns: β* | β=1.0 (vanilla) | β=1.0 (boosted)\nHIGGS (full): 19.5 ± 0.1 | 19.8 ± 0.3 | 19.5 ± 0.1\ngender-22: 3.66 ± 0.01 | 3.72 ± 0.02 | 3.67 ± 0.01\nAdd achieves less than half the error rate of what the best single", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 41, |
| "total_chunks": 49, |
| "char_count": 959, |
| "word_count": 170, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d0ab19f1-92a4-4124-a6e4-364e726a0468", |
| "text": "modality could achieve. As a comparison, Mul has similar (slightly better) results, suggesting the two methods may work through similar mechanisms. However, MulMix and MulMix* clearly outperform Add and Mul, showing the benefit of combining the two types of combination strategies.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 42, |
| "total_chunks": 49, |
| "char_count": 280, |
| "word_count": 41, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "406b8996-5700-447c-b5d0-e499c2b8d692", |
| "text": "6.2.2 Compared to deeper fusion networks\nSince the new approach, especially MulMix (or MulMix*), introduces additional parameters in the fusion network of each mixture, one natural question is whether the improvements simply come from the increased number of parameters. We answer this question by running additional experiments with additive combination (Add) models using deeper fusion networks.\nThe results are plotted in Figure 2. On CIFAR100 and gender, networks with increased depth lead to worse results, which can be due either to increased optimization difficulty or to overfitting. On HIGGS, increased depth first leads to slight improvements and then the error rates rise again. Even the results at the optimal network depth are not as good as our approach's. Overall, the figures show that it is the design, rather than the depth, of the fusion networks that limits their performance. Our approaches, by contrast, are explicitly designed to extract signals selectively and collectively across modalities.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 43, |
| "total_chunks": 49, |
| "char_count": 1014, |
| "word_count": 158, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d0032773-a72e-4018-a208-cbe56e853ff6", |
| "text": "6.2.3 Multiplicative combination or model averaging\nOur loss function in (8) implements a trade-off between model averaging and multiplicative combination: β=0 makes it a model averaging of the different modalities (or modality mixtures), while β=1 makes the model a non-smoothed multiplicative combination. To understand the exact working mechanism and to achieve the best results, we tune β between 0 and 1 and plot the corresponding error rates on the different tasks in Figure 3. We observe that the optimal results do not appear at either end of the spectrum. On the contrary, smoothed multiplicative combinations with optimal βs achieve significantly better results than pure ensembling or pure multiplicative combination. On CIFAR100 and HIGGS, the optimal β values are 0.3 and 0.8, respectively, and they are consistent across the Mul and MulMix models. On gender, Mul clearly favors β=1, as each single modality is very noisy and it makes less sense to evenly average predictions across modalities. We do not have a clear theory for how to choose β automatically. Our hypothesis is that a smaller β leads to stronger regularization (due to smoothed scaling factors) while a larger β gives more modeling flexibility (a highly non-linear combination). As a result, we recommend choosing a smaller β when the original models overfit and a larger β when they underfit.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 44, |
| "total_chunks": 49, |
| "char_count": 1355, |
| "word_count": 213, |
| "chunking_strategy": "semantic" |
| }, |
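The β trade-off described in the chunk above can be sketched as code. The following is a minimal illustrative surrogate, not the paper's actual Eq. (8): it assumes the combined prediction interpolates linearly between model averaging (β=0) and a renormalized multiplicative (product) combination (β=1); the function name, array shapes, and the interpolation form are all assumptions made for illustration.

```python
import numpy as np

def combine_modalities(probs, beta):
    """Interpolate between model averaging (beta=0) and a
    multiplicative combination of modalities (beta=1).

    probs: array of shape (M, C), per-modality class probabilities.
    beta:  float in [0, 1].
    NOTE: illustrative surrogate for the paper's Eq. (8); the paper
    implements this trade-off at the loss level, not as shown here.
    """
    probs = np.asarray(probs, dtype=float)
    avg = probs.mean(axis=0)        # model averaging (ensemble)
    prod = probs.prod(axis=0)       # multiplicative combination
    prod = prod / prod.sum()        # renormalize the product
    mixed = (1.0 - beta) * avg + beta * prod
    return mixed / mixed.sum()      # return a valid distribution
```

At β=0 the function returns the plain modality average; at β=1 it returns the renormalized product; intermediate β values give the smoothed combinations whose optima the chunk reports at 0.3 (CIFAR100) and 0.8 (HIGGS).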
| { |
| "chunk_id": "68fdb479-871a-4ce4-b9ac-38b19f7c0537", |
| "text": "[Figure 3: three panels, (a) CIFAR100 (resnet110), (b) HIGGS (full), (c) gender (22), plotting error rates of mul and mulmix against β.]\nFigure 3: Error rates and standard deviations under different β values. Optimal results do not appear at either 0 or 1. Experimental results are from 5 random runs.\nTable 4: Gender-22 error analysis: mistakes that multimodal methods make where individual modalities do not (we call these \"over-learn\" errors); Mul and MulMix* improve on Add in this respect. The overall improvement is very close to the improvement on \"over-learn\" errors.\nAdd / Mul / MulMix* / improvement\noverall: 3.85 / 3.83 / 3.66 / 0.19\nover-learn: 2.90 / 2.87 / 2.72 / 0.18\n6.2.4 Boosted training\nWe also validate the effectiveness of the boosted training technique. We find that when β in (8) is not tuned, boosted training significantly improves the results. Table 3 shows MulMix(β = 1) test errors on HIGGS and gender. Boosted training helps MulMix(β = 1.0) achieve almost identical results to MulMix*.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 45, |
| "total_chunks": 49, |
| "char_count": 1149, |
| "word_count": 195, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8b80ff5c-2374-445d-9304-0a43ee097e4c", |
| "text": "It is interesting to see that the second and fourth columns have very close numbers. We conjecture that the smoothing effect of β makes the \"mismatch\" issue discussed in Section 3.2 less severe.\n6.2.5 Additional Experiments\nWhere are the improvements made? We are interested in seeing where the improvements are made on this prediction task. It is known that ensemble-like methods help correct predictions on examples where individual classifiers make wrong predictions. However, they also make mistakes on examples where individual classifiers are correct. This is generally due to overfitting, and we call it \"over-learning.\" We expect our methods to reduce \"over-learning\" errors due to their regularization mechanism: we tolerate incorrect predictions from a weak modality while preserving its correct predictions. We analyze errors only on the examples where individual modalities could make correct predictions; more precisely, we evaluate errors on the examples for which at least one single modality predicts correctly. The result is reported in Table 4. We see that Mul and MulMix* both make fewer \"over-learning\" mistakes. Interestingly, the improvement of MulMix* here (0.18) is very close to the improvement on the entire dataset.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 46, |
| "total_chunks": 49, |
| "char_count": 1232, |
| "word_count": 191, |
| "chunking_strategy": "semantic" |
| }, |
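The "over-learning" error analysis described in the chunk above can be sketched as follows. This is a minimal sketch; the function name, array shapes, and label encoding are illustrative assumptions, not the paper's code.

```python
import numpy as np

def over_learn_error_rate(y_true, fused_pred, modality_preds):
    """Error rate of the fused model restricted to examples where
    at least one individual modality predicts correctly -- the
    "over-learning" mistakes analyzed in Table 4.

    y_true, fused_pred: arrays of shape (N,) with class labels.
    modality_preds:     array of shape (M, N), one row per modality.
    """
    y_true = np.asarray(y_true)
    modality_preds = np.asarray(modality_preds)
    # Examples that at least one single modality gets right.
    some_modality_correct = (modality_preds == y_true).any(axis=0)
    fused_wrong = np.asarray(fused_pred) != y_true
    # Count fused mistakes only where a single modality was right.
    return float(np.mean(fused_wrong & some_modality_correct))
```

This mirrors the evaluation protocol in the text: restrict attention to examples that some individual modality classifies correctly, then measure how often the multimodal model still errs on them.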
| { |
| "chunk_id": "dfa017dc-ce67-4f93-96f3-2e81fa242d60", |
| "text": "This suggests that our new methods do prevent individual modalities from \"over-learning.\"\nCompared to attention models. We also tried attention methods [34], in which attention modules are applied to each modality before the modalities are additively combined. We experimented on gender prediction because missing modalities are most common on this task. The results are reported in Table 5; we do not observe clear improvements.\nTable 5: Test errors of attention models on the gender tasks.\nAdd / Add-Attend\ngender-6: 6.07 ± 0.02 / 6.07 ± 0.01\ngender-22: 3.85 ± 0.03 / 3.86 ± 0.03\nCompared to CLDL [24] on CIFAR100. Specific to the image recognition domain, CLDL [24] is a specialization of our \"Mul\" approach based on NIN [29]. We implemented CLDL on ResNet.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 47, |
| "total_chunks": 49, |
| "char_count": 721, |
| "word_count": 116, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b9b76557-9837-48e2-afdd-2794476aba40", |
| "text": "The error rates of the two models are 29.6 ± 0.5 and 25.8 ± 0.4, respectively.\nThis paper investigates new ways to combine multimodal data that account for the heterogeneity of signal strength across modalities, both in general and at a per-sample level. We focus on addressing the challenge of \"weak modalities\": some modalities may provide better predictors on average, but worse ones for a given instance. To exploit these facts, we propose multiplicative combination techniques that tolerate errors from the weak modalities and help combat overfitting. We further propose the multiplicative combination of modality mixtures, which combines the strengths of the proposed multiplicative combination and existing additive combination. Our experiments on three different domains demonstrate consistent accuracy improvements over the state of the art in those domains, showing that our new framework is a general advance not limited to a specific domain or problem.", |
| "paper_id": "1805.11730", |
| "title": "Learn to Combine Modalities in Multimodal Deep Learning", |
| "authors": [ |
| "Kuan Liu", |
| "Yanen Li", |
| "Ning Xu", |
| "Prem Natarajan" |
| ], |
| "published_date": "2018-05-29", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1805.11730v1", |
| "chunk_index": 48, |
| "total_chunks": 49, |
| "char_count": 985, |
| "word_count": 143, |
| "chunking_strategy": "semantic" |
| } |
| ] |