
      Valence-arousal classification of emotion evoked by Chinese ancient-style music using 1D-CNN-BiLSTM model on EEG signals for college students

      research-article


          Abstract

          During the COVID-19 pandemic, young people have been using multimedia content more frequently to communicate with each other on Internet platforms. Among such content, music, as psychological support for a lonely life in this special period, is a powerful tool for emotional self-regulation and relieving loneliness. Emotion-based music recommender systems have therefore attracted growing attention. In recent years, Chinese music has come to be considered an independent genre. Chinese ancient-style music is one of the new folk music styles within Chinese music and is increasingly popular among young people. The complexity of Chinese-style music poses significant challenges to its quantitative analysis. To address the problem of emotion classification in music information retrieval, emotion is often characterized along the dimensions of valence and arousal. This paper focuses on the valence and arousal classification of emotion evoked by Chinese ancient-style music. It proposes a hybrid model combining a one-dimensional convolutional neural network with a bidirectional long short-term memory network (1D-CNN-BiLSTM), and a self-acquired EEG dataset of Chinese college students was designed to classify music-induced emotion by valence and arousal. The proposed 1D-CNN-BiLSTM model was evaluated on the public datasets DEAP and DREAMER as well as on the self-acquired dataset DESC. The experimental results show that, compared with traditional LSTM and 1D-CNN-LSTM models, the proposed method achieves the highest accuracy in the valence classification task of music-induced emotion, reaching 94.85%, 98.41%, and 99.27% on the three datasets, respectively. Its accuracy on the arousal classification task reached 93.40%, 98.23%, and 99.20%, respectively. In addition, compared with the positive-valence classification results, the method shows a clear advantage in negative-valence classification.
This study provides a computational classification model for an emotion-aware music recommender system. It also provides theoretical support for brain-computer interface (BCI) applications of Chinese ancient-style music, which is popular among young people.
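The hybrid architecture named in the abstract can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the layer sizes, kernel width, channel count, and window length are assumptions, and the paper's actual hyperparameters are not given here.

```python
# Hypothetical sketch of a 1D-CNN-BiLSTM for binary valence (or arousal)
# classification from windowed EEG; all sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, n_channels=32, n_classes=2, hidden=64):
        super().__init__()
        # 1D convolution extracts local temporal features from each EEG window
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bidirectional LSTM models longer-range temporal dependencies
        self.bilstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):          # x: (batch, channels, time)
        h = self.conv(x)           # (batch, 64, time // 2)
        h = h.transpose(1, 2)      # (batch, time // 2, 64) for the LSTM
        out, _ = self.bilstm(h)
        return self.fc(out[:, -1]) # classify from the final time step

model = CNNBiLSTM()
logits = model(torch.randn(4, 32, 128))  # 4 windows, 32 channels, 128 samples
print(logits.shape)                      # torch.Size([4, 2])
```

The convolution acts as a learned feature extractor over short time spans, while the BiLSTM reads the resulting feature sequence in both directions, which is the general rationale for CNN-RNN hybrids on EEG.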

          Related collections

          Most cited references (17)


          DEAP: A Database for Emotion Analysis Using Physiological Signals

          IEEE Transactions on Affective Computing, 3(1), 18-31

            DREAMER: A Database for Emotion Recognition Through EEG and ECG Signals From Wireless Low-cost Off-the-Shelf Devices

            In this paper, we present DREAMER, a multimodal database consisting of electroencephalogram (EEG) and electrocardiogram (ECG) signals recorded during affect elicitation by means of audio-visual stimuli. Signals from 23 participants were recorded along with the participants' self-assessment of their affective state after each stimulus, in terms of valence, arousal, and dominance. All the signals were captured using portable, wearable, wireless, low-cost, off-the-shelf equipment that has the potential to allow the use of affective computing methods in everyday applications. A baseline for participant-wise affect recognition using EEG- and ECG-based features, as well as their fusion, was established through supervised classification experiments using support vector machines (SVMs). The self-assessment of the participants was evaluated through comparison with the self-assessments from another study using the same audio-visual stimuli. Classification results for valence, arousal, and dominance of the proposed database are comparable to those achieved for other databases that use non-portable, expensive, medical-grade devices. These results indicate the prospects of using low-cost devices for affect recognition applications. The proposed database will be made publicly available in order to allow researchers to achieve a more thorough evaluation of the suitability of these capturing devices for affect recognition applications.
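The SVM baseline described above can be sketched in a few lines. This is a hedged illustration using scikit-learn and synthetic data standing in for the EEG/ECG feature vectors; the kernel choice and data shapes are assumptions, not DREAMER's actual protocol.

```python
# Minimal sketch of an SVM baseline for binary valence classification;
# the feature vectors here are synthetic stand-ins, not DREAMER features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))  # 200 trials, 20 physiological features each
# Synthetic high/low-valence labels, weakly tied to the first feature
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # supervised SVM classifier
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

In practice such a baseline is run participant-wise and separately for valence, arousal, and dominance, with real hand-crafted EEG/ECG features in place of the random matrix above.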

              Emotion Recognition based on EEG using LSTM Recurrent Neural Network


                Author and article information

                Contributors
                dury@njupt.edu.cn
                shujinzhu@njupt.edu.cn
                Journal
                Multimed Tools Appl
                Multimed Tools Appl
                Multimedia Tools and Applications
                Springer US (New York )
                1380-7501
                1573-7721
                4 October 2022
                : 1-18
                Affiliations
                [1] GRID grid.453246.2, ISNI 0000 0004 0369 3615, School of Geographic and Biologic Information, Nanjing University of Posts and Telecommunications, Nanjing, China
                [2] Smart Health Big Data Analysis and Location Services Engineering Laboratory of Jiangsu Province, Nanjing, China
                [3] GRID grid.410561.7, ISNI 0000 0001 0169 5113, School of Electronics and Information Engineering, Tianjin Polytechnic University, Tianjin, China
                Author information
                http://orcid.org/0000-0002-8268-9272
                Article
                14011
                10.1007/s11042-022-14011-7
                9530425
                36213341
                398d76a0-f26a-4b96-b7c4-bd8d372c263b
                © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022, Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

                This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.

                History
                : 12 January 2022
                : 22 June 2022
                : 23 September 2022
                Funding
                Funded by: FundRef http://dx.doi.org/10.13039/501100001809, National Natural Science Foundation of China;
                Award ID: 61977039
                Categories
                Article

                Graphics & Multimedia design
                EEG, Chinese music, emotion classification, BiLSTM
