Open Access

      TransModality: An End2End Fusion Method with Transformer for Multimodal Sentiment Analysis

      Preprint


          Abstract

Multimodal sentiment analysis is an important research area that predicts a speaker's sentiment tendency from features extracted from the textual, visual, and acoustic modalities. The central challenge is how to fuse the multimodal information. A variety of fusion methods have been proposed, but few of them adopt end-to-end translation models to mine the subtle correlations between modalities. Inspired by the recent success of the Transformer in machine translation, we propose a new fusion method, TransModality, for the task of multimodal sentiment analysis. We assume that translation between modalities contributes to a better joint representation of a speaker's utterance. With the Transformer, the learned features embody information from both the source modality and the target modality. We validate our model on multiple multimodal datasets: CMU-MOSI, MELD, and IEMOCAP. The experiments show that our proposed method achieves state-of-the-art performance.
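The abstract frames fusion as translation between modalities with a Transformer: encode one modality, decode toward another, and treat the decoder's output as a joint representation. Below is a minimal, illustrative PyTorch sketch of that general idea, not the authors' implementation; the class name, feature dimensions, pooling, and classification head are all assumptions made for the example.

# Illustrative sketch of cross-modal "translation" fusion with a Transformer.
# All names, shapes, and hyperparameters are hypothetical, not from the paper.
import torch
import torch.nn as nn

class CrossModalTranslator(nn.Module):
    """Translate a source modality (e.g. text) toward a target modality
    (e.g. visual); the decoder output attends to both streams, so it can
    serve as a fused utterance representation."""

    def __init__(self, src_dim, tgt_dim, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.src_proj = nn.Linear(src_dim, d_model)   # project source features
        self.tgt_proj = nn.Linear(tgt_dim, d_model)   # project target features
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.recon = nn.Linear(d_model, tgt_dim)      # translation objective head

    def forward(self, src_seq, tgt_seq):
        src = self.src_proj(src_seq)                  # (B, T_src, d_model)
        tgt = self.tgt_proj(tgt_seq)                  # (B, T_tgt, d_model)
        fused = self.transformer(src, tgt)            # (B, T_tgt, d_model)
        return fused, self.recon(fused)               # fused features + reconstruction

# Hypothetical usage: fuse text -> visual, mean-pool, classify sentiment.
if __name__ == "__main__":
    model = CrossModalTranslator(src_dim=300, tgt_dim=35)
    text = torch.randn(8, 20, 300)    # batch of text feature sequences
    visual = torch.randn(8, 20, 35)   # aligned visual feature sequences
    fused, recon = model(text, visual)
    classifier = nn.Linear(128, 3)    # e.g. negative / neutral / positive
    logits = classifier(fused.mean(dim=1))

In this sketch, a reconstruction loss on recon would encourage the model to actually translate between modalities, while the fused features feed the sentiment classifier; how the paper combines and weights these objectives is not specified in the abstract.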


          Author and article information

Preprint: 07 September 2020
arXiv ID: 2009.02902
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Venue (custom metadata): Proceedings of The Web Conference 2020
arXiv subject: cs.CL (Computation and Language)
ScienceOpen discipline: Theoretical computer science
