[
{
"chunk_id": "1cf7589a-4936-4998-be8b-af59f8f058fa",
"text": "Onno Kampman, Elham J. Barezi, Dario Bertero, Pascale Fung\nCenter for AI Research (CAiRE)\nHong Kong University of Science and Technology, Clear Water Bay, Hong Kong\nejs,dbertero@connect.ust.hk, pascale@ece.ust.hk Abstract each subject filling in a specific questionnaire. The\nbiggest effort to predict personality, as well as\nWe propose a tri-modal architecture to pre- to release a benchmark open-domain personality\ndict Big Five personality trait scores from corpus, was given by the ChaLearn 2016 shared\nvideo clips with different channels for au- task challenge (Ponce-L´opez et al., 2016). All2018 dio, text, and video data. For each channel, the best performing teams used neural network\nstacked Convolutional Neural Networks techniques. They extracted traditional audio feaare employed.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 1,
"total_chunks": 21,
"char_count": 793,
"word_count": 113,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9a2ce287-6977-4e0d-9b74-1b29837064f7",
"text": "The channels are fused both tures (zero crossing rate, energy, spectral features,May on decision-level and by concatenating MFCCs) and then fed them into the neural nettheir respective fully connected layers. It is\n16 shown that a multimodal fusion approach work (Subramaniam et al., 2016; G¨urpınar et al., 2016; Zhang et al., 2016). A deep neural network\noutperforms each single modality channel, should however be able to extract such features\nwith an improvement of 9.4% over the best itself. G¨uc¸l¨ut¨urk et al. (2016) took a different apindividual modality (video). Full backprop- proach by feeding the raw audio and video samples\nagation is also shown to be better than a to the network. However they mostly designed the[cs.AI] linear combination of modalities, meaning network for computer vision, and used the same\ncomplex interactions between modalities architecture to audio input without any adaptation\ncan be leveraged to build better models. or considerations to merge modalities. The chalFurthermore, we can see the prediction rel- lenge was in general aimed at the computer vision\nevance of each modality for each trait. The community (many only used facial features), thus\ndescribed model can be used to increase the not many looked into what their deep network was\nemotional intelligence of virtual agents. learning regarding other modalities. In this paper, we are interested in predicting per- 1 Introduction\nsonality from speech, language and video frames\nAutomatic prediction of personality is important (facial features). We first consider the different\nfor the development of emotional and empathetic modalities separately, in order to have more unvirtual agents. Humans are very quick to assign derstanding of how personality is expressed and\ncharacters (Nass et al., 1995). People infer per- ity definition.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 2,
"total_chunks": 21,
"char_count": 1834,
"word_count": 280,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d95c8176-23de-437c-a412-63c5a517f361",
"text": "Then we design and analyze fusion\nsonality from different cues, both behavioral and methods to effectively combine the three modaliverbal, hence a model to predict personality should ties in order to obtain a performance improvement\ntake into account multiple modalities including lan- over each individual modality. The readers can reguage, speech and visual cues. fer to the survey by Baltruˇsaitis et al. (2018) for\nPersonality is typically modeled with the Big more information on multi-modal approaches. Five personality descriptors (Goldberg, 1990). In\nthis model an individual's personality is defined as 2 Methodology\na collection of five scores in range [0, 1] for personality traits Extraversion, Agreeableness, Con- Our multimodal deep neural network architecture\nscientiousness, Neuroticism and Openness to Ex- consists of three separate channels for audio, text,\nperience. These score are usually calculated by and video. The channels are fused both in decisionlevel fusion and inside the neural network. The by using a professional human transcription serthree channels are trained in a multiclass way, in- vice 1 to ensure maximum quality for the ground\nstead of separate models for each trait (Farnadi truth annotations. For this channel we extract\net al., 2016). The full architecture is trained end- word2vec word embeddings from transcriptions\nto-end, which refers to models that are completely and feed those into a CNN. The embeddings have\ntrained from the most raw input representation to a dimensionality of k = 300 and were pre-trained\nthe desired output, with all the parameters learned on Google News data (around 300 billion words).\nfrom data (Muller et al., 2006). Automatic fea- This enables us to take into account much more\nture learning has the capacity to outperform feature contextual information than available in just the\nengineering, as these learned features are automati- corpus at hand.\ncally tuned to the task at hand. 
The full neural net- Transcriptions per sample were split up into difwork architecture with the three channels is shown ferent sentences. We normalized the text in order\nin Fig. 1. to align our corpus with the embedding dictionary. We fed the computed matrix into a CNN, whose\n2.1 Audio channel\narchitecture is inspired by Kim (2014). Three conThe audio channel looks at acoustic and prosodic volutional windows of size three, four, and five\n(i.e. non-verbal) information of speech.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 3,
"total_chunks": 21,
"char_count": 2442,
"word_count": 380,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3d021e2f-463f-4309-b1f8-4d359d589463",
"text": "It takes words are slid over the sentence, taking steps of\nraw waveforms as input instead of commonly used one word each time. These windows are expected\nspectrograms or traditional feature sets, as CNNs to extract compact n-grams from sentences. After\nare able to autonomously extract relevant features this layer, a max-pooling is taken for the outcome\ndirectly from audio (Bertero et al., 2016). of each of the kernels separately to get a final senInput audio samples are first downscaled to a tence encoding. The representation is then mapped\nuniform sampling rate of 8 kHz before fed to the through a fully connected layer with sigmoid actimodel. During each training iteration (but not dur- vation, to the final Big Five personality traits.\ning evaluation), for each input audio sample we\nrandomize the amplitude (volume) through an ex- 2.3 Video channel\nponential random coefficient α = 10U(−1.5,1.5), In the video channel, we first take a random frame\nwhere U(−1.5, 1.5) is a uniform random variable, from each of the videos, which leads to personalto avoid any bias related to recording volumes. ity recognition from only appearance. We did not\nWe split the input signal into two feature chan- use Long Short Term Memory (LSTM) networks\nnels as input for the CNN: the raw waveform as-is, because we only need appearance information, not\nand the signal with squared amplitude (aimed at temporal and movement information. Although\ncapturing the energy component of the signal).",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 4,
"total_chunks": 21,
"char_count": 1484,
"word_count": 241,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ebe7def8-4ad9-453b-906f-c78cea87239f",
"text": "A many works in the ChaLearn competition align\nstack of four convolutional layers is applied to the faces manually using the famous Viola-Jones algoinput to perform feature extraction from short over- rithm (Viola and Jones, 2004) and crop them from\nlapping windows, analyze variations over neigh- frames (G¨urpınar et al., 2016), here we choose not\nboring regions of different sizes, and combine all to in order to prevent excessive preprocessing. It\ncontributions throughout the sample. has also been found and shown that deep archiWe used global average pooling operation over tectures can automatically learn to focus on the\nthe output of each layer to capture globally ex- face (Gucluturk et al., 2017).\npressed personality characteristics over the entire We extract representations from the images usaudio frame and to combine the contributions of ing the VGG-face CNN model (Parkhi et al., 2015),\nthe convolutional layer outputs. After obtaining the with pre-trained VGG-16 weights (Simonyan and\noverall vector by weighted-average of each convo- Zisserman, 2014). Input images are fed into the\nlutional layer output, it is fed to a fully connected model with their three channels (blue, red, and\nlayer with final sigmoid layer to perform the final green). Several convolutional layers combined\nregression operation to map this representation to a with max-pooling and padding layers follow. We\nscore in range [0, 1] for each of the five traits. use two final fully connected layers, followed by\nsigmoid activation to map to the five traits. We2.2 Text channel\nonly train these two final layers. Fine-tuning preThe transcriptions for the ChaLearn dataset, provided by the challenge organizers, were obtained 1http://www.rev.com Input Raw waveform Audio\nLx2\n! #x512 !\"\" !\n$\"\" #x512 ! #x512 %\"\" ! 
#x512 1x64 1x64 64 &\"\" Global average pool over temporal\naxis of outputs from 4 conv layers\nFully connected layer + ReLU Input sentence embedding Text\nLx300 1x640 1x640 1x5 Lx64\n1x64 1x192 1x64 64 Lx64\nt 1x64 Convolution + ReLU Lx64 Max pool over temporal axis Fully connected layer + ReLU",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 5,
"total_chunks": 21,
"char_count": 2090,
"word_count": 334,
"chunking_strategy": "semantic"
},
{
"chunk_id": "40227de7-469f-4ded-993d-605f9d4740cb",
"text": "Input 224x224x3 Video\nVideo frame 224x224x64 56x56x256\n28x28x512 512 14x14x512 7x7x512 Convolution + ReLU Concatenation of channel outputs\nMax pool\nFully connected layer + ReLU\nFully connected layer + ReLU\nFully connected layer +\nsigmoid activation to Big Five Figure 1: Diagram of the tri-modal architecture for prediction of Big Five traits from audio, text and\nvideo input. Concatenating output of the three individual modalities results in an output layer with size of\n64+64+512=640. trained models as such is a common way to lever- proach, done through an ensemble (voting) method.\nage on training on large outside datasets and thus The three channels are copies of the fully trained\nthe model doesn't need to learn extracting visual models described above. We want to know the\nfeatures itself (Esteva et al., 2017). linear combination weights for each modality, for\neach trait (15 weights in total). The final pre-\n2.4 Multimodal fusion dictions from our tri-modal model then become\nˆpi = PNj=1 wi,j ˆpi,j, where pi represents the enHumans infer personality from different cues and semble estimator for the score of trait i, j reprethis motivates us to predict personality by using sents the modality (with N = 3 the number of\nmultiple modalities simultaneously. We look at\nmodalities), and wi,j and ˆpi,j the weights and estithree different fusing methods to find how to com- mates respectively for trait i for modality j. The\nbine modalities best. weights were found by minimizing the Mean AbThe first method is a decision-level fusion apsolute Error (MAE) on the development set. We We trained our model using Adam optimizer\nchoose to have weights per trait because the uni- (Kingma and Ba, 2014). All models were implemodal results show that some modalities are better mented with Keras (Chollet et al., 2015). We train\nat predicting some traits than others. 
An important all models parameters with a regression cost funcadvantage of using this fusion method is that we tion by iteratively minimizing the average over five\ncan read the relevance of the modalities for each of traits of the Mean Square Error (MSE) between\nthe traits , from the weights. model predictions and ground truth labels. We also\nIn the other fusion methods we propose, we first use the MSE with the ground truth to evaluate the\nmerge the modalities by truncating the final fully performance over the test set.\nconnected layers from each of the channels.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 6,
"total_chunks": 21,
"char_count": 2438,
"word_count": 400,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1004258c-716a-4748-be24-674b8446f2c4",
"text": "We\n3.3 Results\nthen concatenate the previous fully connected layers, to obtain shared representations of the input Our aim is to investigate the contribution of differdata. Finally we add two extra fully connected lay- ent modalities for personality detection task. For the second method, all layers in the ble 1 includes the optimal weights learned for the\nseparate channels are frozen, so we basically want decision-level fusion approach. From this table we\nthe model to learn what combination (could still be can read contribution of each of the modalities to\na linear one) of channel outputs is optimal. For the the prediction of each trait.\nthird method, we again train this exact architecture,\nBig Five Personality Traits\nbut now we also update the parameters of both the\naudio and text channels, thus enabling fully end- Model E A C N O\nto-end backpropagation. This enables the model Audio 0.44 0.32 0.27 0.45 0.54\nText -0.03 0.22 0.13 0.03 -0.06\nto learn more complex interaction between the dif- Video 0.59 0.46 0.60 0.52 0.52\nferent channels. The architecture is illustrated in\nFig. 1.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 7,
"total_chunks": 21,
"char_count": 1095,
"word_count": 182,
"chunking_strategy": "semantic"
},
{
"chunk_id": "01e92661-f59b-41b8-bf87-50b76451ab29",
"text": "The layers in the dashed boxes are frozen Table 1: Optimal weights learned for combining\n(non-trainable) for limited backpropation model, the three modalities for each trait. E, A, C, N, and\nand trainable for full backpropagation model. O stand for Extraversion, Agreeableness, Conscientiousness, Neuroticism and Openness, respectively.\n3 Experiments\nTable 2 displays the tri-modal regression (MAE)\n3.1 Corpus\nperformance, individual modalities, multimodal\nWe used the ChaLearn First Impressions Dataset, decision-level fusion and the two neural network\nwhich consists of YouTube vlogs clips of around 15 fusion methods. We also report the average trait\nseconds (Ponce-L´opez et al., 2016). The speaker in scores from the training set labels as a baseline\neach video is annotated with Big Five personality (Mairesse et al., 2007). The neural network fusion\nscores. The ChaLearn dataset was divided into a with full backpropagation works best with an avertraining set of 6,000 clips and 20% of the training age MSE score of 0.0938, around 2.9% improveset was taken as validation set during training to ment over the last-layer backpropagation only, and\ntune the hyperparameter, the early stopping condi- 9.4% over the best separate modality (video). Both\ntions and the ensemble method training. We used in separate and ensemble methods, the results we\npre-defined ChaLearn Validation Set of 2,000 clips obtain are lower than just estimating the average\nas the test set. from the training set. The main target in this work is to investigate\n3.2 Setup the effect of audio, visual, and text modalities, and\nFor the audio CNN, we used a window size of 200 different fusion methods in personality recognition,\n(i.e. 25 ms) for the first layer, and a stride of 100 rather than proposing the method with the best ac-\n(i.e. 12.5 ms). In the following convolutional layers curacy. 
However, we still repeat the accuracy of the\nwe set the window size and stride is set to 8 and 2 reported methods in Table 2 and two winners of the\nrespectively.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 8,
"total_chunks": 21,
"char_count": 2032,
"word_count": 327,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2d4669fd-e67d-4ac5-9c80-76c10c09e811",
"text": "The number of filters was kept at 512 ChaLearn 2016 competition DCC (G¨uc¸l¨ut¨urk et al.,\nfor each layer. In the text CNN instead we used a 2016) and evolgen (Subramaniam et al., 2016) in\nfilter size of 128, and apply dropout (p = 0.5) to Table 3. It can be seen that the result for out trithe last layer. In the video CNN we used again 512 modal method with fully back-propagation is comfilters for each layer. parable to the winners.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 9,
"total_chunks": 21,
"char_count": 436,
"word_count": 81,
"chunking_strategy": "semantic"
},
{
"chunk_id": "268ccf64-5a36-4ab2-9f29-ee71f26313bf",
"text": "MAE Big Five Personality Traits formation (Aran and Gatica-Perez, 2013). Polzehl\nModel Mean E A C N O et al. (2010) have proposed a method for personAudio .1059 .1080 .0953 .1160 .1077 .1024\nText .1132 .1177 .0977 .1206 .1167 .1135 ality recognition in speech modality on a different\nVideo .1035 .1040 .0960 .1087 .1064 .1024 corpus. The method is not based on neural network\nDLF .0967 .0970 .0893 .1049 .0979 .0947\nNNLB .0966 .0970 .0896 .1038 .0973 .0951 architecture, but they provide a similar analysis that\nNNFB .0938 .0958 .0907 .0922 .0964 .0938 supports our conclusions. Train labels avg .1165 .1194 .1009 .1261 .1209 .1153\nSince the full backpropagation experiments\nTable 2: MAE for each individual modality, fusion yields much better results than the linear combinaapproaches and average of training set labels. DLF tion model, we can conclude that different modalrefers to decision-level fusion, NNLB and NNFB ities interact with each other in a non-trivial manrefer to neural network limited backprop and full ner. Moreover, we can observe that simply adding\nback respectively. features from different modalities (represented as\nconcatenating a final representation without full\nMean Acc Big Five Personality Traits backpropagation) does not yield optimal perforModel Mean E A C N O mance. Audio .8941 .8920 .9047 .8840 .8923 .8976 Our tri-modal approach is quite extensive and\nText .8868 .8823 .9023 .8794 .8833 .8865\nVideo .8965 .8960 .9040 .8913 .8936 .8976 there are more modalities such as nationality, culDLF .9033 .9030 .9107 .8951 .9021 .9053\nNNLB .9034 .9030 .9104 .8962 .9027 .9049 tural background, age, gender, and personal interNNFB .9062 .9042 .9093 .9078 .9036 .9062 ests that can be added. All Big Five traits have\nDCC .9121 .9104 .9154 .9130 .9097 .9119\nevolgen .9133 .9145 .9157 .9135 .9098 .9130 been found to have a correlation with age (DonnelTrain labels avg .8835 .8806 .8991 .8739 .8791 .8847 lan and Lucas, 2008). 
Extraversion and Openness\nhave a negative correlation with age, Agreeableness\nTable 3: Mean accuracy for each individual\nhave a positive correlation, and Conscientiousness\nmodality, fusion approaches, two winners of the\nscores peak for middle age subjects. ChaLearn 2016 competition (DCC and evolgen),\nand average of training set labels. 4 Conclusion We proposed a fusion method, based on deep neural\n3.4 Discussion networks, to predict personality traits from audio,\nlanguage and appearance. We have seen that eachLooking at the results obtained from various fusion\nof the three modalities contains a signal relevant formethods in Table 2, we can see that for decisionpersonality prediction, that using all three modali-level and last-layer backpropagation, Neuroticism\nties combined greatly outperforms using individualand Extraversion are the easiest traits to predict, folmodalities, and that the channels interact with eachlowed by Conscientiousness and Openness. Agreeother in a non-trivial fashion. By combining theableness is significantly harder. We also see that the\nlast network layers and fine-tuning the parameterslast-layer fusion yields very similar performance\nwe have obtained the best result, average among allas the decision-level approach.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 10,
"total_chunks": 21,
"char_count": 3210,
"word_count": 487,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4a65a1f5-b78c-4650-b1b0-9a593604e481",
"text": "It is likely that the\ntraits, of 0.0938 Mean Square Error, which is 9.4%limited backpropagation method learns something\nbetter than the performance of the best individualsimilar to a linear combination of channels, just like\nmodality (visual). Out of all modalities, languagethe decision-level method. On the other hand, the\nor speech pattern seems to be the least relevant.full backpropagation method yields significantly\nVideo frames (appearance) are slightly more rele-higher results for all traits except Agreeableness.\nvant than audio information (i.e. non-verbal parts\nFrom Table 1 we can also see which modalities\nof speech).\ncarry more information. The text modality is not\nadding much value to traits other than Agreeable- 5 Acknowledgments\nness and Conscientiousness. Extraversion can, on\nthe other hand, be quite easily recognized from tone This work was partially funded by grants\nof voice and appearance. Having said this, we must #16214415 and #16248016 of the Hong Kong Rebe careful in deciding which modalities are most search Grants Council, ITS/319/16FP of Innovation\nsuitable for individual traits, since certain traits (e.g. Technology Commission, and RDC 1718050-0 of\nExtraversion) are more evident from a short slice EMOS.AI.\nand some (e.g. Openness) need longer temporal inReferences Yoon Kim. 2014. Convolutional neural networks for\nsentence classification.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 11,
"total_chunks": 21,
"char_count": 1381,
"word_count": 202,
"chunking_strategy": "semantic"
},
{
"chunk_id": "488a5ba6-f9b2-4f7c-a883-f04a858cf618",
"text": "In Proceedings of the 2014\nOya Aran and Daniel Gatica-Perez. 2013. One of a Conference on Empirical Methods in Natural Lankind: Inferring personality impressions in meetings. guage Processing (EMNLP). Association for ComIn Proceedings of the 15th ACM on International putational Linguistics, pages 1746–1751.\nconference on multimodal interaction. Diederik Kingma and Jimmy Ba. 2014.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 12,
"total_chunks": 21,
"char_count": 382,
"word_count": 53,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ae03c97d-058b-42fc-abf4-9b444772a9df",
"text": "Adam: A\nmethod for stochastic optimization. arXiv preprint\nPhilippe Morency. 2018. Multimodal machine learning: A survey and taxonomy. IEEE Transactions on Franc¸ois Mairesse, Marilyn A Walker, Matthias R\nPattern Analysis and Machine Intelligence . Mehl, and Roger K Moore. 2007.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 13,
"total_chunks": 21,
"char_count": 279,
"word_count": 40,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fde93b6f-28ed-410c-8b11-2d56a3ec60ed",
"text": "Using linguistic\ncues for the automatic recognition of personality inDario Bertero, Farhad Bin Siddique, Chien-Sheng\nconversation and text. Journal of artificial intelli- Wu, Yan Wan, Ricky Ho Yin Chan, and Pasgence research 30:457–500. cale Fung. 2016. Real-time speech emotion\nand sentiment recognition for interactive dialogue Urs Muller, Jan Ben, Eric Cosatto, Beat Flepp, and\nsystems. In Proceedings of the 2016 Con- Yann L Cun. 2006.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 14,
"total_chunks": 21,
"char_count": 439,
"word_count": 65,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9cc9d3a1-4ae2-4182-9da8-7651f9535e40",
"text": "Off-road obstacle avoidance\nference on Empirical Methods in Natural Lan- through end-to-end learning. In Advances in neural\nguage Processing. Association for Computational information processing systems. pages 739–746. Linguistics, Austin, Texas, pages 1042–1047.\nhttps://aclweb.org/anthology/D16-1110. Clifford Nass, Youngme Moon, BJ Fogg, Byron\nReeves, and Chris Dryer. 1995.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 15,
"total_chunks": 21,
"char_count": 377,
"word_count": 45,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d82f1a5b-5da7-412f-a2dc-2a6d2f64f4ed",
"text": "Can computer perFranc¸ois Chollet et al. 2015. Keras. https://\nsonalities be human personalities. In Conference\ngithub.com/fchollet/keras.\ncompanion on Human factors in computing systems.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 16,
"total_chunks": 21,
"char_count": 187,
"word_count": 23,
"chunking_strategy": "semantic"
},
{
"chunk_id": "34039f87-d643-408a-a0cd-097948383253",
"text": "M Brent Donnellan and Richard E Lucas. 2008. Age ACM, pages 228–229.\ndifferences in the big five across the life span: eviOmkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, dence from two national samples. Psychology and\net al. 2015. Deep face recognition. In BMVC. vol- aging 23(3):558.\nume 1, page 6. Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin\nTim Polzehl, Sebastian Moller, and Florian Metze. Ko, Susan M Swetter, Helen M Blau, and Sebas-\n2010.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 17,
"total_chunks": 21,
"char_count": 455,
"word_count": 75,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d537f4f9-41a2-48f1-bed3-7b36d3b1fb4e",
"text": "Automatically assessing personality from tian Thrun. 2017. Dermatologist-level classification\nspeech. In Semantic Computing (ICSC), 2010 IEEE of skin cancer with deep neural networks. Nature\nFourth International Conference on. IEEE, pages 542(7639):115–118.\n134–140. Golnoosh Farnadi, Geetha Sitaraman, Shanu Sushmita,\nFabio Celli, Michal Kosinski, David Stillwell, Ser- V´ıctor Ponce-L´opez, Baiyu Chen, Marc Oliu, Ciprian\ngio Davalos, Marie-Francine Moens, and Martine Corneanu, Albert Clap´es, Isabelle Guyon, Xavier\nDe ***. 2016.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 18,
"total_chunks": 21,
"char_count": 533,
"word_count": 67,
"chunking_strategy": "semantic"
},
{
"chunk_id": "eaf53977-1b86-45b1-9615-46da0202b590",
"text": "Computational personality recog- Bar´o, Hugo Jair Escalante, and Sergio Escalera.\nnition in social media. User modeling and user- 2016. Chalearn lap 2016: First round chaladapted interaction 26(2-3):109–142. lenge on first impressions-dataset and results. In\nComputer Vision–ECCV 2016 Workshops.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 19,
"total_chunks": 21,
"char_count": 295,
"word_count": 38,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f8dcd9bd-f42a-4580-b983-110560fe856f",
"text": "Springer,\nLewis R Goldberg. 1990. An alternative\" description pages 400–418.\nof personality\": the big-five factor structure. Journal of personality and social psychology 59(6):1216. Karen Simonyan and Andrew Zisserman. 2014. Very\ndeep convolutional networks for large-scale image\nJair Escalante, Xavier Baro, Isabelle Guyon, Carlos\nAndujar, Julio Jacques Junior, Meysam Madadi, Ser- Arulkumar Subramaniam, Vismay Patel, Ashish\ngio Escalera, et al. 2017. Visualizing apparent per- Mishra, Prashanth Balasubramanian, and Anurag\nsonality analysis with deep residual networks. Bi-modal first impressions recognition\nProceedings of the IEEE Conference on Computer using temporally ordered deep audio and stochastic\nVision and Pattern Recognition. pages 3101–3109. visual features. In Computer Vision–ECCV 2016\nWorkshops. Springer, pages 337–348. Ya˘gmur G¨uc¸l¨ut¨urk, Umut G¨uc¸l¨u, Marcel AJ van Gerven, and Rob van Lier. 2016. Deep impres- Paul Viola and Michael J Jones. 2004. Robust real-time\nsion: audiovisual deep residual networks for mul- face detection. International journal of computer vitimodal apparent personality trait recognition. In sion 57(2):137–154. Computer Vision–ECCV 2016 Workshops.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 20,
"total_chunks": 21,
"char_count": 1202,
"word_count": 157,
"chunking_strategy": "semantic"
},
{
"chunk_id": "beeb56b0-262c-4495-82f5-29304740fbb4",
"text": "Springer,\npages 349–358. Chen-Lin Zhang, Hao Zhang, Xiu-Shen Wei, and\nJianxin Wu. 2016. Deep bimodal regression for apFurkan G¨urpınar, Heysem Kaya, and Albert Ali Salah. parent personality analysis. In Computer Vision–\n2016. Combining deep facial and ambient features ECCV 2016 Workshops. Springer, pages 311–324.\nfor first impression estimation. In Computer Vision–\nECCV 2016 Workshops. Springer, pages 372–385.",
"paper_id": "1805.00705",
"title": "Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction",
"authors": [
"Onno Kampman",
"Elham J. Barezi",
"Dario Bertero",
"Pascale Fung"
],
"published_date": "2018-05-02",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1805.00705v2",
"chunk_index": 21,
"total_chunks": 21,
"char_count": 413,
"word_count": 57,
"chunking_strategy": "semantic"
}
]