      Enhancing ECG classification with continuous wavelet transform and multi-branch transformer

          Abstract

          Background

          Accurate classification of electrocardiogram (ECG) signals is crucial for the automatic diagnosis of heart disease. However, existing ECG classification methods often require complex preprocessing and denoising operations, and traditional convolutional neural network (CNN)-based methods struggle to capture complex relationships and high-level time-series features.

          Method

          In this study, we propose an ECG classification method based on the continuous wavelet transform and a multi-branch transformer. The method uses the continuous wavelet transform (CWT) to convert the ECG signal into a time-series feature map, eliminating the need for complicated preprocessing. A multi-branch transformer is then introduced to enhance feature extraction during training and to improve classification performance by removing redundant information while preserving important features.
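          A minimal sketch (not the authors' code) of how a CWT step of the kind described above can turn a 1-D ECG segment into a 2-D time-frequency feature map, here using the PyWavelets library; the wavelet choice, scale range, and sampling rate are illustrative assumptions rather than values reported in the paper:

          import numpy as np
          import pywt

          fs = 360                            # assumed sampling rate in Hz (MIT-BIH records are sampled at 360 Hz)
          t = np.arange(0, 2.0, 1 / fs)       # a 2-second analysis window
          ecg = np.sin(2 * np.pi * 1.2 * t)   # placeholder waveform; substitute a real ECG segment here

          scales = np.arange(1, 65)           # 64 scales -> 64 rows in the resulting feature map (assumed)
          coeffs, freqs = pywt.cwt(ecg, scales, wavelet="morl", sampling_period=1 / fs)

          feature_map = np.abs(coeffs)        # (scales x time) matrix, usable as input to a 2-D CNN or transformer
          print(feature_map.shape)            # (64, 720)

          A matrix like this is one concrete form the time-series feature map described above could take; the downstream multi-branch transformer would then operate on such 2-D inputs.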

          Results

          The proposed method was evaluated on the public CPSC 2018 (6877 cases) and MIT-BIH (47 cases) ECG datasets, achieving accuracies of 98.53% and 99.38% and F1 scores of 97.57% and 98.65%, respectively. These results outperform most existing methods, demonstrating the effectiveness of the proposed approach.
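          For reference, a small illustrative snippet (with made-up labels, not the paper's data) showing how overall accuracy and an averaged F1 score of the kind reported above are typically computed for multi-class ECG predictions with scikit-learn; the macro averaging scheme is an assumption, since the paper's averaging choice is not stated here:

          from sklearn.metrics import accuracy_score, f1_score

          y_true = [0, 2, 1, 3, 0, 2, 1, 1]   # hypothetical arrhythmia class labels
          y_pred = [0, 2, 1, 3, 0, 1, 1, 1]   # hypothetical model predictions

          acc = accuracy_score(y_true, y_pred)
          f1 = f1_score(y_true, y_pred, average="macro")   # averaging scheme assumed for illustration
          print(f"accuracy={acc:.4f}, macro F1={f1:.4f}")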

          Conclusion

          The proposed method accurately classifies ECG time-series feature maps and holds promise for the diagnosis of cardiac arrhythmias. The findings of this study contribute to advancing the field of automatic ECG diagnosis.

                Author and article information

                Journal: Heliyon (Elsevier), ISSN 2405-8440
                Published online: 21 February 2024
                Issue date: 15 March 2024
                Volume 10, Issue 5, Article e26147
                Affiliations
                [1] School of Information Technology, Yunnan University, Kunming, China
                Author notes
                Corresponding author: yndxqcy@163.com
                Article identifiers
                PII: S2405-8440(24)02178-9
                DOI: 10.1016/j.heliyon.2024.e26147
                PMCID: 10906304
                PMID: 38434292
                © 2024 The Authors. Published by Elsevier Ltd.

                This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

                History
                Received: 8 April 2023
                Revised: 28 January 2024
                Accepted: 8 February 2024
                Categories
                Research Article

                Keywords: arrhythmia, multi-branch transformer, continuous wavelet transform, convolutional neural network, time-series feature map
