[
{
"chunk_id": "21504bb1-e5bd-4434-9460-ddb655ec64e6",
"text": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors Zhun Liu∗, Ying Shen∗, Varun Bharadhwaj Lakshminarasimhan,\nPaul Pu Liang, Amir Zadeh, Louis-Philippe Morency\nSchool of Computer Science\nCarnegie Mellon University\n{zhunl,yshen2,vbl,pliang,abagherz,morency}@cs.cmu.edu Abstract as well as speaker trait analysis and media description (Park et al., 2014a) have seen a great boost\nMultimodal research is an emerging field in performance with developments in multimodal\nof artificial intelligence, and one of the research.\nmain research problems in this field is mulHowever, a core research challenge yet to be2018 timodal fusion. The fusion of multimodal solved in this domain is multimodal fusion. The\ndata is the process of integrating multiple\ngoal of fusion is to combine multiple modalities\nunimodal representations into one compact\nto leverage the complementarity of heterogeneous\nmultimodal representation. Previous re-May data and provide more robust predictions. In this\nsearch in this field has exploited the ex-\n31 pressiveness of tensors for multimodal rep- regard, an important challenge has been on scaling up fusion to multiple modalities while maintaining\nresentation. However, these methods often\nreasonable model complexity. Some of the recent\nsuffer from exponential increase in dimenattempts (Fukui et al., 2016), (Zadeh et al., 2017)\nsions and in computational complexity inat multimodal fusion investigate the use of tensors\ntroduced by transformation of input into\nfor multimodal representation and show significant[cs.AI] tensor.",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 0,
"total_chunks": 14,
"char_count": 1567,
"word_count": 217,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e5c9e46d-558b-406c-9c84-58ed7cbf785a",
"text": "In this paper, we propose the Lowimprovement in performance. Unfortunately, they\nrank Multimodal Fusion method, which\nare often constrained by the exponential increase\nperforms multimodal fusion using low-rank\nof cost in computation and memory introduced by\ntensors to improve efficiency. We evaluate\nusing tensor representations. This heavily restricts\nour model on three different tasks: multhe applicability of these models, especially when\ntimodal sentiment analysis, speaker trait\nwe have more than two views of modalities in the\nanalysis, and emotion recognition. Our\ndataset.\nmodel achieves competitive results on all\nIn this paper, we propose the Low-rank Multhese tasks while drastically reducing comtimodal Fusion, a method leveraging low-rank\nputational complexity. Additional experiweight tensors to make multimodal fusion efficient\nments also show that our model can perwithout compromising on performance. The overform robustly for a wide range of low-rank\nall architecture is shown in Figure 1. We evalusettings, and is indeed much more efficient\ntimodal tasks using public datasets and compare\nother methods that utilize tensor represenits performance with state-of-the-art models. We\ntations.\nalso study how different low-rank settings impact\n1 Introduction the performance of our model and show that our\nmodel performs robustly within a wide range of\nMultimodal research has shown great progress in a rank settings. Finally, we perform an analysis of\nvariety of tasks as an emerging research field of arti- the impact of our method on the number of paramficial intelligence. Tasks such as speech recognition eters and run-time with comparison to other fusion\n(Yuhas et al., 1989), emotion recognition, (De Silva methods. 
Through theoretical analysis, we show\net al., 1997), (Chen et al., 1998), (W¨ollmer et al., that our model can scale linearly in the number of\n2013), sentiment analysis, (Morency et al., 2011) modalities, and our experiments also show a corre-\n∗equal contributions sponding speedup in training when compared with Visual\n!\" (\"\nMultimodal\nAudio Low-rank Representation\n!& (& Multimodal Prediction Task output\n# Fusion\n%& ℎ Language\n!' ('\n%' Low-rank factors Low-rank factors Low-rank factors + + ⋯ + 1 ∘ + + ⋯ + 1 ∘ + + ⋯ + 1\n# # #\n*\"(,) *\"(.) *\"(/) %\" *&(,) *&(.) *&(/) %& *'(,) *'(.) *'(/) %'",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 1,
"total_chunks": 14,
"char_count": 2331,
"word_count": 363,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f044f297-a406-4799-9e53-1fbf70ed97bc",
"text": "Figure 1: Overview of our Low-rank Multimodal Fusion model structure: LMF first obtains the unimodal\nrepresentation za,zv,zl by passing the unimodal inputs xa,xv,xl into three sub-embedding networks\nfv,fa,fl respectively. LMF produces the multimodal output representation by performing low-rank\nmultimodal fusion with modality-specific factors. The multimodal representation can be then used for\ngenerating prediction tasks. other tensor-based models. concatenated features as input, sometimes even reThe main contributions of our paper are as fol- moving the temporal dependency present in the\nlows: modalities (Morency et al., 2011). The drawback\nof this class of method is that although it achieves\n• We propose the Low-rank Multimodal Fusion fusion at an early stage, intra-modal interactions\nmethod for multimodal fusion that can scale are potentially suppressed, thus losing out on the\nlinearly in the number of modalities. context and temporal dependencies within each\nmodality.\n• We show that our model compares to state-of- On the other hand, late fusion builds sepathe-art models in performance on three multi- rate models for each modality and then integrates\nmodal tasks evaluated on public datasets. the outputs together using a method such as majority voting or weighted averaging (Wortwein\n• We show that our model is computationally\nand Scherer, 2017), (Nojavanasghari et al., 2016).\nefficient and has fewer parameters in compariSince separate models are built for each modality,\nson to previous tensor-based methods.\ninter-modal interactions are usually not modeled\neffectively.\n2 Related Work\nGiven these shortcomings, more recent work\nMultimodal fusion enables us to leverage comple- focuses on intermediate approaches that model\nmentary information present in multimodal data, both intra- and inter-modal dynamics. Fukui et al.\nthus discovering the dependency of information on (2016) proposes to use Compact Bilinear Pooling\nmultiple modalities. 
Previous studies have shown over the outer product of visual and linguistic reprethat more effective fusion methods translate to bet- sentations to exploit the interactions between vision\nter performance in models, and there's been a wide and language for visual question answering. Simrange of fusion methods. ilar to the idea of exploiting interactions, Zadeh\nEarly fusion is a technique that uses feature et al. (2017) proposes Tensor Fusion Network,\nconcatenation as the method of fusion of differ- which computes the outer product between unient views. Several works that use this method of modal representations from three different modalfusion (Poria et al., 2016) , (Wang et al., 2016) ities to compute a tensor representation. These\nuse input-level feature concatenation and use the methods exploit tensor representations to model inter-modality interactions and have shown a great H where V1,V2,...,VM are the vector spaces of\nsuccess. However, such methods suffer from expo- input modalities and H is the output vector space.\nnentially increasing computational complexity, as Given a set of vector representations, {zm}Mm=1\nthe outer product over multiple modalities results in which are encoding unimodal information of the\nextremely high dimensional tensor representations. M different modalities, the goal of multimodal\nFor unimodal data, the method of low-rank ten- fusion is to integrate the unimodal representations\nsor approximation has been used in a variety of into one compact multimodal representation for\napplications to implement more efficient tensor op- downstream tasks.\nerations. Razenshteyn et al. (2016) proposes a mod- Tensor representation is one successful approach\nified weighted version of low-rank approximation, for multimodal fusion. 
It first requires a transforand Koch and Lubich (2010) applies the method mation of the input representations into a hightowards temporally dependent data to obtain low- dimensional tensor and then mapping it back to a\nrank approximations.",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 2,
"total_chunks": 14,
"char_count": 3978,
"word_count": 579,
"chunking_strategy": "semantic"
},
{
"chunk_id": "413f2f9a-dfef-4f60-bfa2-a76d96242068",
"text": "As for applications, Lei et al. lower-dimensional output vector space. Previous\n(2014) proposes a low-rank tensor technique for works have shown that this method is more effecdependency parsing while Wang and Ahuja (2008) tive than simple concatenation or pooling in terms\nuses the method of low-rank approximation applied of capturing multimodal interactions (Zadeh et al.,\ndirectly on multidimensional image data (Datum- 2017), (Fukui et al., 2016). Tensors are usually\nas-is representation) to enhance computer vision created by taking the outer product over the input\napplications.",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 3,
"total_chunks": 14,
"char_count": 585,
"word_count": 85,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5a726e6d-a70d-47dc-8e52-7283a2ee4516",
"text": "Hu et al. (2017) proposes a low-rank modalities. In addition, in order to be able to model\ntensor-based fusion framework to improve the face the interactions between any subset of modalities\nrecognition performance using the fusion of facial using one tensor, Zadeh et al. (2017) proposed a\nattribute information. However, none of these previ- simple extension to append 1s to the unimodal repous work aims to apply low-rank tensor techniques resentations before taking the outer product. The\nfor multimodal fusion. input tensor Z formed by the unimodal representaOur Low-rank Multimodal Fusion method pro- tion is computed by:\nvides a much more efficient method to com- M\npute tensor-based multimodal representations with Z = zm,zm ∈Rdm (1) m=1⊗\nmuch fewer parameters and computational comM\nplexity. The efficiency and performance of our ap- where m=1 denotes the tensor outer product over ⊗\nproach are evaluated on different downstream tasks, a set of vectors indexed by m, and zm is the input\nnamely sentiment analysis, speaker-trait recogni- representation with appended 1s.\ntion and emotion recognition. The input tensor Z ∈Rd1×d2×...dM is then passed\nthrough a linear layer g(⋅) to to produce a vector\n3 Low-rank Multimodal Fusion representation:\nIn this section, we start by formulating the problem h = g(Z;W,b) = W ⋅Z + b, h,b ∈Rdy (2)\nof multimodal fusion and introducing fusion meth- where W is the weight of this layer and b is the\nods based on tensor representations. With Z being an order-M tensor (where\npowerful in their expressiveness but do not scale M is the number of input modalities), the weight\nwell to a large number of modalities. Our proposed W will naturally be a tensor of order-(M + 1) in\nmodel decomposes the weights into low-rank fac- Rd1×d2×...×dM×dh. The extra (M +1)-th dimension\ntors, which reduces the number of parameters in corresponds to the size of the output representation\nthe model. This decomposition can be performed dh. 
In the tensor dot product W ⋅Z, the weight tenefficiently by exploiting the parallel decomposition sor W can be then viewed as dh order-M tensors.\nof low-rank weight tensor and input tensor to com- In other words, the weight W can be partitioned\npute tensor-based fusion. Our method is able to into Wk̃ ∈Rd1×...×dM , k = 1,...,dh. Each Wk̃ conscale linearly with the number of modalities. tributes to one dimension in the output vector h, i.e.\nhk = Wk̃ ⋅Z. This interpretation of tensor fusion is\n3.1 Multimodal Fusion using Tensor illustrated in Figure 2 for the bi-modal case. Representations One of the main drawbacks of tensor fusion\nIn this paper, we formulate multimodal fusion as is that we have to explicitly create the higha multilinear function f ∶V1 × V2 × ... × VM → dimensional tensor Z.",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 4,
"total_chunks": 14,
"char_count": 2765,
"word_count": 460,
"chunking_strategy": "semantic"
},
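The tensor fusion formulation in Equations 1-2 of the chunk above can be sketched in a few lines of numpy. This is a minimal illustration with toy dimensions (d_a, d_v, d_l, d_y are made-up sizes, not the paper's settings); the appended 1s follow the extension of Zadeh et al. (2017) described in the text.

```python
import numpy as np

# Toy dimensions for a trimodal (audio, visual, language) example.
da, dv, dl, dy = 4, 5, 6, 3
rng = np.random.default_rng(0)

# Unimodal representations with a 1 appended, so the outer-product tensor
# also contains all unimodal and bimodal sub-interactions.
za = np.concatenate([rng.standard_normal(da), [1.0]])
zv = np.concatenate([rng.standard_normal(dv), [1.0]])
zl = np.concatenate([rng.standard_normal(dl), [1.0]])

# Equation 1: input tensor Z as the tensor outer product over modalities.
Z = np.einsum('a,v,l->avl', za, zv, zl)        # shape (da+1, dv+1, dl+1)

# Equation 2: linear layer g(Z; W, b) = W . Z + b with an order-4 weight W.
W = rng.standard_normal((da + 1, dv + 1, dl + 1, dy))
b = rng.standard_normal(dy)
h = np.einsum('avl,avly->y', Z, W) + b         # h in R^dy

print(Z.shape, h.shape)                        # (5, 6, 7) (3,)
```

Note how the number of entries in Z (and hence in W) is the product of the per-modality dimensions, which is exactly the exponential blow-up the paper sets out to remove.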
{
"chunk_id": "22375882-7ac3-46ff-be61-2612b5f810e4",
"text": "The dimensionality of Z R decomposi- {{w(i)m,k}Mm=1}Ri=1 are called the rank\n!# tion factors of the original tensor. E In LMF, we start with a fixed rank r, and pa-\n⨂ R M ? = ℎ rameterize the model with r decomposition factors\nthat can be used to !> {{w(i)m,k}Mm=1}ri=1,k = 1,...,dh\nE reconstruct a low-rank version of these Wk.̃\nWe can regroup and concatenate these vectors\nFigure 2: Tensor fusion via tensor outer product into M modality-specific low-rank factors. Let\nthen for modality w(i)m = [w(i)m,1,w(i)m,2,...,w(i)m,dh],will increase exponentially with the number of\n{w(i)m }ri=1 is its corresponding low-rank factors.modalities as ∏Mm=1 dm. The number of parameters m,\nto learn in the weight tensor W will also increase And we can recover a low-rank weight tensor by:\nexponentially. This not only introduces a lot of r M\nm (4)computation but also exposes the model to risks of W = ∑ m=1⊗ w(i) i=1overfitting. Hence equation 2 can be computed by\n3.2 Low-rank Multimodal Fusion with\nModality-Specific Factors r M w(i)m ) ⋅Z (5) h = ( ∑ ⊗As a solution to the problems of tensor-based fu- i=1 m=1\nsion, we propose Low-rank Multimodal Fusion Note that for all m, w(i)m ∈Rdm×dh shares the(LMF). LMF parameterizes g(⋅) from Equation same size for the second dimension. We define\n2 with a set of modality-specific low-rank factors\ntheir outer product to be over only the dimensions\nthat can be used to recover a low-rank weight tenthat are not shared: w(i)m ⊗w(i)n ∈Rdm×dn×dh. Asor, in contrast to the full tensor we W. 
Moreover,\nbimodal example of this procedure is illustrated inshow that by decomposing the weight into a set\nFigure 3.of low-rank factors, we can exploit the fact that\nNevertheless, by introducing the low-rank fac-the tensor Z actually decomposes into {zm}Mm=1,\ntors, we now have to compute the reconstructionwhich allows us to directly compute the output h M w(i)m for the forward computa- m=1without explicitly tensorizing the unimodal repre- of W = ∑ri=1⊗\ntion. Yet this introduces even more computation.sentations.",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 5,
"total_chunks": 14,
"char_count": 2038,
"word_count": 336,
"chunking_strategy": "semantic"
},
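Equations 4-5 of the chunk above can be sketched for a bimodal case: build the low-rank weight W from modality-specific factors (with the outer product taken only over the non-shared dimensions, so the d_h axis is shared), then contract it against the input tensor Z. All dimensions and the rank r below are illustrative toy values, not the paper's settings.

```python
import numpy as np

# Toy sizes: two modalities of dimension da, dv, output size dy, rank r.
da, dv, dy, r = 4, 5, 3, 2
rng = np.random.default_rng(1)

# Modality-specific factors w_m^(i) in R^{d_m x d_h}, r of them per modality.
wa = rng.standard_normal((r, da, dy))
wv = rng.standard_normal((r, dv, dy))

# Equation 4: W = sum_i w_a^(i) (x) w_v^(i); the outer product runs over
# the non-shared axes only, giving W in R^{da x dv x dy}.
W = np.einsum('iay,ivy->avy', wa, wv)

# Equation 5: contract the reconstructed W against Z = z_a (x) z_v.
za = rng.standard_normal(da)
zv = rng.standard_normal(dv)
Z = np.outer(za, zv)
h = np.einsum('avy,av->y', W, Z)

print(W.shape, h.shape)  # (4, 5, 3) (3,)
```

As the surrounding text notes, reconstructing W explicitly like this adds computation; the next section's parallel decomposition removes that step.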
{
"chunk_id": "51832be0-434a-4234-9e6d-eaf4c4b93f05",
"text": "LMF reduces the number of parameters\nas well as the computation complexity involved 3.2.2 Efficient Low-rank Fusion Exploiting\nin tensorization from being exponential in M to Parallel Decomposition\nlinear. In this section, we will introduce an efficient pro-\n3.2.1 Low-rank Weight Decomposition cedure for computing h, exploiting the fact that\ntensor Z naturally decomposes into the originalThe idea of LMF is to decompose the weight tensor\nW into M sets of modality-specific factors. How- input {zm}Mm=1, which is parallel to the modalityspecific low-rank factors. In fact, that is the mainever, since W itself is an order-(M + 1) tensor,\nreason why we want to decompose the weight ten-commonly used methods for decomposition will\nsor into M modality-specific factors.result in M + 1 parts. Hence, we still adopt the M\nm=1 zm, we can sim-view introduced in Section 3.1 that W is formed by Using the fact that Z = ⊗\nplify equation 5:dh order-M tensors Wk̃ ∈Rd1×...×dM ,k = 1,...,dh\nstacked together. We can then decompose each Wk̃ r M\nm ) ⋅Zseparately. h = ( ∑ m=1⊗ w(i) i=1 For an order-M tensor Wk̃ ∈Rd1×...×dM, there\nalways exists an exact decomposition into vectors\n( M w(i)m ⋅Z) = ∑ ⊗in the form of: i=1 m=1\nr M R\nM w(i) w(i) ( M w(i)m ⋅ zm) ∑ Wk̃ = m,k, m,k ∈Rdm (3) = ∑ m=1⊗ m=1⊗ i=1 m=1⊗ i=1\nM r\nw(i)m ⋅zm] (6)The minimal R that makes the decomposition valid = Λ [ ∑\nis called the rank of the tensor. The vector sets m=1 i=1 !# 5#(S) 5#(T)\n⨂ g V + + ⋯ = ℎ\n!> (S) (T)\n⨂ 5> ⨂ 5> E",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 6,
"total_chunks": 14,
"char_count": 1486,
"word_count": 278,
"chunking_strategy": "semantic"
},
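The decoupled computation derived above can be checked numerically for a bimodal case: contracting the reconstructed low-rank W against Z (Equation 5) matches combining the per-modality projections w_m^(i) · z_m per rank component and summing over the rank axis (the ordering Equation 8 later makes explicit), without ever materialising Z. Dimensions and rank below are toy values chosen for the sketch.

```python
import numpy as np

# Toy bimodal setting: modality dims da, dv, output dim dy, rank r.
da, dv, dy, r = 4, 5, 3, 2
rng = np.random.default_rng(2)
wa = rng.standard_normal((r, da, dy))   # audio factors  w_a^(i)
wv = rng.standard_normal((r, dv, dy))   # visual factors w_v^(i)
za = rng.standard_normal(da)
zv = rng.standard_normal(dv)

# Explicit route (Equations 4-5): reconstruct W, build Z, contract.
W = np.einsum('iay,ivy->avy', wa, wv)
h_explicit = np.einsum('avy,av->y', W, np.outer(za, zv))

# Decoupled route: project each modality first, then combine per rank
# component with an element-wise product and sum over the rank axis.
pa = np.einsum('iay,a->iy', wa, za)     # (r, dy): w_a^(i) . z_a
pv = np.einsum('ivy,v->iy', wv, zv)     # (r, dy): w_v^(i) . z_v
h_lmf = (pa * pv).sum(axis=0)

assert np.allclose(h_explicit, h_lmf)
print("decoupled fusion matches explicit tensor contraction")
```

The decoupled route never forms the da × dv tensor Z or the da × dv × dy weight W, which is where the linear-in-M scaling comes from.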
{
"chunk_id": "cf1b3177-46e6-4684-a243-d659ded4d0d6",
"text": "Figure 3: Decomposing weight tensor into low-rank factors (See Section 3.2.1 for details.) where Λ Mm=1 denotes the element-wise product factors into M order-3 tensors and swap the orover a sequence of tensors: Λ 3t=1 xt = x1 ○x2 ○x3. der in which we do the element-wise product and\nAn illustration of the trimodal case of equation 6 summation:\nis shown in Figure 1. We can also derive equation r M\n(8)6 for a bimodal case to clarify what it does: h = ∑ [ Λ [w(1)m ,w(2)m ,...,w(r)m ] ⋅ˆzm]\ni=1 m=1 i,∶ and now the summation is done along the first di- r\nmension of the bracketed matrix. [⋅]i,∶indicates the w(i)a ⊗w(i)v ) ⋅Z h = ( ∑\ni=1 i-th slice of a matrix. In this way, we can parameterize the model with M order-3 tensors, instead of\n(7) = ( ∑ r w(i)a ⋅za) ○( ∑r w(i)v ⋅zv) parameterizing with sets of vectors. i=1 i=1 4 Experimental Methodology An important aspect of this simplification is that\nit exploits the parallel decomposition of both Z We compare LMF with previous state-of-the-art\nand W, so that we can compute h without actually baselines, and we use the Tensor Fusion Networks\ncreating the tensor Z from the input representa- (TFN) (Zadeh et al., 2017) as a baseline for tensortions zm. In addition, different modalities are de- based approaches, which has the most similar struccoupled in the simplified computation of h, which ture with us except that it explicitly forms the large\nallows for easy generalization of our approach to multi-dimensional tensor for fusion across different\nan arbitrary number of modalities. Adding a new modalities.\nmodality can be simply done by adding another We design our experiments to better understand\nset of modality-specific factors and extend Equa- the characteristics of LMF. Our goal is to answer\ntion 7.",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 7,
"total_chunks": 14,
"char_count": 1766,
"word_count": 307,
"chunking_strategy": "semantic"
},
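The order-3 parameterization described around Equation 8 above can be sketched as follows: keep one stacked factor tensor per modality, project every modality with it, take the element-wise product across modalities, and sum along the rank axis. Trimodal toy sizes throughout; nothing here reflects the paper's actual dimensions.

```python
import numpy as np

# One order-3 factor tensor per modality: shape (rank, d_m, dy).
r, dy = 4, 3
dims = {'a': 5, 'v': 6, 'l': 7}
rng = np.random.default_rng(3)
factors = {m: rng.standard_normal((r, d, dy)) for m, d in dims.items()}
z = {m: rng.standard_normal(d) for m, d in dims.items()}

# Project every modality with its stacked factor tensor: (r, dy) each.
proj = {m: np.einsum('rdy,d->ry', factors[m], z[m]) for m in dims}

# Element-wise product across modalities, then sum along the rank axis.
h = (proj['a'] * proj['v'] * proj['l']).sum(axis=0)
print(h.shape)  # (3,)
```

Adding a fourth modality is just one more entry in `dims`, one more factor tensor, and one more term in the element-wise product, which is the linear generalization the text describes.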
{
"chunk_id": "6ce35ea7-409a-4483-a271-9d0c0244cdb5",
"text": "Last but not least, Equation 6 consists of the following four research questions:\nfully differentiable operations, which enables the (1) Impact of Multimodal Low-rank Fusion: Diparameters {w(i)m }ri=1 m = 1,...,M to be learned rect comparison between our proposed LMF model\nend-to-end via back-propagation. and the previous TFN model. Using Equation 6, we can compute h directly (2) Comparison with the State-of-the-art: We\nfrom input unimodal representations and their evaluate the performance of LMF and state-of-themodal-specific decomposition factors, avoiding the art baselines on three different tasks and datasets.\nweight-lifting of computing the large input ten- (3) Complexity Analysis: We study the modal\nsor Z and W, as well as the r linear transfor- complexity of LMF and compare it with the TFN\nmation. Instead, the input tensor and subsequent model.\nlinear projection are computed implicitly together (4) Rank Settings: We explore performance of\nin Equation 6, and this is far more efficient than LMF with different rank settings.\nthe original method described in Section 3.1. In- The results of these experiments are presented\ndeed, LMF reduces the computation complexity of in Section 5.\ntensorization and fusion from O(dy ∏Mm=1 dm) to\nO(dy × r × ∑Mm=1 dm). 4.1 Datasets\nIn practice, we use a slightly different form of We perform our experiments on the following multiEquation 6, where we concatenate the low-rank modal datasets, CMU-MOSI (Zadeh et al., 2016a), Dataset CMU-MOSI IEMOCAP POM Visual The library Facet1 is used to extract a set of\nLevel Segment Segment Video\n# Train 1284 6373 600 visual features for each frame (sampled at 30Hz) in-\n# Valid 229 1775 100 cluding 20 facial action units, 68 facial landmarks,\n# Test 686 1807 203 head pose, gaze tracking and HOG features (Zhu\nTable 1: The speaker independent data splits for et al., 2006).\ntraining, validation, and test sets. 
Acoustic We use COVAREP acoustic analysis\nframework (Degottex et al., 2014) to extract a set\nof low-level acoustic features, including 12 Mel\nPOM (Park et al., 2014b), and IEMOCAP (Busso frequency cepstral coefficients (MFCCs), pitch,\net al., 2008) for sentiment analysis, speaker traits voiced/unvoiced segmentation, glottal source, peak\nrecognition, and emotion recognition task, where slope, and maxima dispersion quotient features.\nthe goal is to identify speakers emotions based on\nthe speakers' verbal and nonverbal behaviors. 4.3 Model Architecture\nCMU-MOSI The CMU-MOSI dataset is a collec- In order to compare our fusion method with prevition of 93 opinion videos from YouTube movie ous work, we adopt a simple and straightforward\nreviews. Each video consists of multiple opinion 2 model architecture for extracting unimodal repsegments and each segment is annotated with the resentations. Since we have three modalities for\nsentiment in the range [-3,3], where -3 indicates each dataset, we simply designed three unimodal\nhighly negative and 3 indicates highly positive. sub-embedding networks, denoted as fa,fv,fl, to\nPOM The POM dataset is composed of 903 movie extract unimodal representations za,zv,zl from unireview videos. Each video is annotated with the fol- modal input features xa,xv,xl. For acoustic and\nlowing speaker traits: confident, passionate, voice visual modality, the sub-embedding network is a\npleasant, dominant, credible, vivid, expertise, enter- simple 2-layer feed-forward neural network, and\ntaining, reserved, trusting, relaxed, outgoing, thor- for language modality, we used an LSTM (Hochreough, nervous, persuasive and humorous. iter and Schmidhuber, 1997) to extract represenIEMOCAP The IEMOCAP dataset is a collection tations.",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 8,
"total_chunks": 14,
"char_count": 3674,
"word_count": 556,
"chunking_strategy": "semantic"
},
{
"chunk_id": "03e42924-b56b-425f-9b12-46a9e1984141",
"text": "The model architecture is illustrated in\nof 151 videos of recorded dialogues, with 2 speak- Figure 1.\ners per session for a total of 302 videos across the\ndataset. Each segment is annotated for the presence 4.4 Baseline Models\nof 9 emotions (angry, excited, fear, sad, surprised, We compare the performance of LMF to the followfrustrated, happy, disappointed and neutral). ing baselines and state-of-the-art models in multiTo evaluate model generalization, all datasets are modal sentiment analysis, speaker trait recognition,\nsplit into training, validation, and test sets such that and emotion recognition.\nthe splits are speaker independent, i.e., no identical Support Vector Machines Support Vector Maspeakers from the training set are present in the chines (SVM) (Cortes and Vapnik, 1995) is a\ntest sets. Table 1 illustrates the data splits for all widely used non-neural classifier.",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 9,
"total_chunks": 14,
"char_count": 888,
"word_count": 135,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2d504b11-a84b-45af-8d81-9ea497af4c06",
"text": "This baseline\ndatasets in detail. is trained on the concatenated multimodal features\nfor classification or regression task (P´erez-Rosas\n4.2 Features et al., 2013), (Park et al., 2014a), (Zadeh et al.,\nEach dataset consists of three modalities, namely 2016b).\nlanguage, visual, and acoustic modalities. To reach Deep Fusion The Deep Fusion model (DF) (Nothe same time alignment across modalities, we javanasghari et al., 2016) trains one deep neural\nperform word alignment using P2FA (Yuan and model for each modality and then combine the outLiberman, 2008) which allows us to align the three put of each modality network with a joint neural\nmodalities at the word granularity. We calculate the network.\nvisual and acoustic features by taking the average Tensor Fusion Network The Tensor Fusion Netof their feature values over the word time interval work (TFN) (Zadeh et al., 2017) explicitly models\n(Chen et al., 2017). view-specific and cross-view dynamics by creatLanguage We use pre-trained 300-dimensional ing a multi-dimensional tensor that captures uniGlove word embeddings (Pennington et al., 2014)\n1goo.gl/1rh1JN\nto encode a sequence of transcribed words into a 2The source code of our model is available on Github at\nsequence of word vectors. https://github.com/Justin1904/Low-rank-Multimodal-Fusion modal, bimodal and trimodal interactions across parison reported in the last two rows of Table 2\nthree modalities. demonstrates that our model significantly outperMemory Fusion Network The Memory Fusion forms TFN across all datasets and metrics. This\nNetwork (MFN) (Zadeh et al., 2018a) accounts for competitive performance of LMF compared to TFN\nview-specific and cross-view interactions and con- emphasizes the advantage of Low-rank Multimodal\ntinuously models them through time with a special Fusion.\nattention mechanism and summarized through time\nwith a Multi-view Gated Memory. 
5.2 Comparison with the State-of-the-art\nBidirectional Contextual LSTM The Bidirec- We compare our model with the baselines and statetional Contextual LSTM (BC-LSTM) (Zadeh et al., of-the-art models for sentiment analysis, speaker\n2017), (Fukui et al., 2016) performs context- traits recognition and emotion recognition. Results\ndependent fusion of multimodal data. are shown in Table 2. LMF is able to achieve comMulti-View LSTM The Multi-View LSTM (MV- petitive and consistent results across all datasets. LSTM) (Rajagopalan et al., 2016) aims to capture On the multimodal sentiment regression task,\nboth modality-specific and cross-modality interac- LMF outperforms the previous state-of-the-art\ntions from multiple modalities by partitioning the model on MAE and Corr. Note the multiclass\nmemory cell and the gates corresponding to multi- accuracy is calculated by mapping the range of\nple modalities. continuous sentiment values into a set of intervals\nMulti-attention Recurrent Network The Multi- that are used as discrete classes.\nattention Recurrent Network (MARN) (Zadeh et al., On the multimodal speaker traits Recognition\n2018b) explicitly models interactions between task, we report the average evaluation score over\nmodalities through time using a neural component 16 speaker traits and shows that our model achieves\ncalled the Multi-attention Block (MAB) and storing the state-of-the-art performance over all three evalthem in the hybrid memory called the Long-short uation metrics on the POM dataset. Term Hybrid Memory (LSTHM).",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 10,
"total_chunks": 14,
"char_count": 3437,
"word_count": 498,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fb0be3e8-4cef-4d24-a29d-e973d9b40d89",
"text": "On the multimodal emotion recognition task, our\nmodel achieves better results compared to the state-4.5 Evaluation Metrics\nof-the-art models across all emotions on the F1\nMultiple evaluation tasks are performed during our score. F1-emotion in the evaluation metrics indievaluation: multi-class classification and regres- cates the F1 score for a certain emotion class.\nsion. The multi-class classification task is applied\nto all three multimodal datasets, and the regres- 5.3 Complexity Analysis\nsion task is applied to the CMU-MOSI and the\nTheoretically, the model complexity of our fuPOM dataset. For binary classification and multision method is O(dy × r × ∑Mm=1 dm) comparedclass classification, we report F1 score and accuto O(dy ∏Mm=1 dm) of TFN from Section 3.1. Inracy Acc−k where k denotes the number of classes. practice, we calculate the total number of parameSpecifically, Acc−2 stands for the binary classifica- ters used in each model, where we choose M = 3,tion. For regression, we report Mean Absolute Erd1 = 32, d2 = 32, d3 = 64, r = 4, dy = 1. Underror (MAE) and Pearson correlation (Corr).",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 11,
"total_chunks": 14,
"char_count": 1108,
"word_count": 177,
"chunking_strategy": "semantic"
},
{
"chunk_id": "057faae5-5c07-4c2f-8d74-d8dc3383576e",
"text": "Higher values denote better performance for all metrics except for MAE.\n5 Results and Discussion\nIn this section, we present and discuss the results from the experiments designed to study the research questions introduced in Section 4.\n5.1 Impact of Low-rank Multimodal Fusion\nIn this experiment, we compare our model directly with the TFN model, since it has the structure most similar to ours, except that TFN explicitly forms the multimodal tensor fusion.\nUnder this hyper-parameter setting, our model contains about 1.1e6 parameters while TFN contains about 12.5e6 parameters, which is nearly 11 times more. Note that the number of parameters above counts not only the parameters in the multimodal fusion stage but also the parameters in the subnetworks.\nFurthermore, we evaluate the computational complexity of LMF by measuring the training and testing speeds of LMF and TFN. Table 3 illustrates the impact of Low-rank Multimodal Fusion on the training and testing speeds compared with the TFN model. Here we set the rank to 4, since it can generally achieve fairly competent performance.\nDataset: CMU-MOSI (MAE, Corr, Acc-2, F1, Acc-7) | POM (MAE, Corr, Acc) | IEMOCAP (F1-Happy, F1-Sad, F1-Angry, F1-Neutral)\nSVM 1.864 0.057 50.2 50.1 17.5 | 0.887 0.104 33.9 | 81.5 78.8 82.4 64.9\nDF 1.143 0.518 72.3 72.1 26.8 | 0.869 0.144 34.1 | 81.0 81.2 65.4 44.0\nBC-LSTM 1.079 0.581 73.9 73.9 28.7 | 0.840 0.278 34.8 | 81.7 81.7 84.2 64.1\nMV-LSTM 1.019 0.601 73.9 74.0 33.2 | 0.891 0.270 34.6 | 81.3 74.0 84.3 66.7\nMARN 0.968 0.625 77.1 77.0 34.7 | - - 39.4 | 83.6 81.2 84.2 65.9\nMFN 0.965 0.632 77.4 77.3 34.1 | 0.805 0.349 41.7 | 84.0 82.1 83.7 69.2\nTFN 0.970 0.633 73.9 73.4 32.1 | 0.886 0.093 31.6 | 83.6 82.8 84.2 65.4\nLMF 0.912 0.668 76.4 75.7 32.8 | 0.796 0.396 42.8 | 85.8 85.9 89.0 71.7\nTable 2: Results for sentiment analysis on CMU-MOSI, emotion recognition on IEMOCAP and personality trait recognition on POM. Best results are highlighted in bold.\nModel | Training Speed (IPS) | Testing Speed (IPS)\nTFN | 340.74 | 1177.17\nLMF | 1134.82 | 2249.90\nTable 3: Comparison of the training and testing speeds between TFN and LMF. The second and third columns indicate the number of data point inferences per second (IPS) during training and testing time, respectively.\nThe results for different numbers of ranks are presented in Figure 4. We observed that as the rank increases, the training results become more and more unstable, and that using a very low rank is enough to achieve fairly competent performance.\n6 Conclusion",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 12,
"total_chunks": 14,
"char_count": 2453,
"word_count": 407,
"chunking_strategy": "semantic"
},
{
"chunk_id": "78c57aa4-098d-42da-95cb-0f440b7715fe",
"text": "Both models are implemented in the same framework with equivalent running environments. Based on these results, performing low-rank multimodal fusion with modality-specific low-rank factors significantly reduces the amount of time needed for training and testing the model. On an NVIDIA Quadro K4200 GPU, LMF trains at an average of 1134.82 IPS (data point inferences per second) while the TFN model trains at an average of 340.74 IPS.\n5.4 Rank Settings\nTo evaluate the impact of different rank settings on our LMF model, we measure the change in performance on the CMU-MOSI dataset while varying the number of ranks.\nIn this paper, we introduce a Low-rank Multimodal Fusion method that performs multimodal fusion with modality-specific low-rank factors. LMF scales linearly in the number of modalities and achieves competitive results across different multimodal tasks. Furthermore, LMF demonstrates a significant decrease in computational complexity, from exponential to linear time. In practice, LMF effectively improves training and testing efficiency compared to TFN, which performs multimodal fusion with tensor representations.\nFuture work on similar topics could explore applications of low-rank tensors for attention models over tensor representations, as these can be even more memory- and computation-intensive.\nAcknowledgements\nThis material is based upon work partially supported by the National Science Foundation (Award #1833355) and Oculus VR. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or Oculus VR, and no official endorsement should be inferred.",
"paper_id": "1806.00064",
"title": "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors",
"authors": [
"Zhun Liu",
"Ying Shen",
"Varun Bharadhwaj Lakshminarasimhan",
"Paul Pu Liang",
"Amir Zadeh",
"Louis-Philippe Morency"
],
"published_date": "2018-05-31",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1806.00064v1",
"chunk_index": 13,
"total_chunks": 14,
"char_count": 1716,
"word_count": 250,
"chunking_strategy": "semantic"
}
]