      A Novel Framework for Classification of Different Alzheimer’s Disease Stages Using CNN Model


          Abstract

Background: Alzheimer’s, the predominant form of dementia, is a neurodegenerative brain disorder with no known cure. With the lack of innovative findings to diagnose and treat Alzheimer’s, the number of middle-aged people with dementia is estimated to rise to nearly 13 million by 2050. The estimated cost of Alzheimer’s and other related ailments is USD 321 billion in 2022 and could rise above USD 1 trillion by 2050. Therefore, the early prediction of such diseases using computer-aided systems is a topic of considerable interest and substantial study among scholars. The major objective is to develop a comprehensive framework for detecting the earliest onset and categorizing the different phases of Alzheimer’s. Methods: Experimental work for this novel approach is performed by implementing convolutional neural networks (CNNs) on MRI image datasets. Five classes of Alzheimer’s disease subjects are multi-classified. We used transfer learning to reap the benefits of pre-trained classification models such as MobileNet. Results: Various performance metrics are used for the evaluation and comparison of the proposed model. The test results reveal that the CNN architecture has a suitably simple structure that mitigates computational burden, memory usage, and overfitting, while keeping training time manageable. The fine-tuned MobileNet pre-trained model achieved 96.6 percent accuracy for multi-class AD stage classification. Other models, such as VGG16 and ResNet50, were applied to the same dataset during this research, and the proposed model yielded better results than these alternatives. Conclusion: The study develops a novel framework for the identification of different AD stages. The main advantage of this approach is the creation of lightweight neural networks. The MobileNet model is mostly used for mobile applications and has rarely been used for medical image analysis; hence, we implemented this model for disease detection and obtained better results than existing models.
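
To make the Methods concrete, below is a minimal sketch of the kind of transfer-learning pipeline the abstract describes: a MobileNet backbone pre-trained on ImageNet with a new five-class head, trained with the backbone frozen and then fine-tuned. The dataset path, image size, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of MobileNet transfer learning for five AD stage classes (assumed setup).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5          # five AD stages, as in the paper
IMG_SIZE = (224, 224)    # MobileNet's default input resolution

# MRI slices arranged in one sub-folder per class (paths are hypothetical).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "ad_mri/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "ad_mri/val", image_size=IMG_SIZE, batch_size=32)

# Pre-trained MobileNet backbone without its ImageNet classification head.
base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False   # freeze the backbone for the first training phase

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNet expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),                        # mitigates overfitting on small datasets
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

# Fine-tuning phase: unfreeze the backbone and continue at a lower learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```

The two-phase schedule (frozen backbone, then low-learning-rate fine-tuning) is a common way to adapt a lightweight pre-trained network to a small medical imaging dataset without destroying the pre-trained features.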


Most cited references (51)


          Representation learning: a review and new perspectives.

          The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.
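
As an illustration of the unsupervised feature learning this review surveys, here is a minimal autoencoder sketch: an encoder compresses each input into a low-dimensional code that a decoder learns to reconstruct, and the codes can then be reused as features for downstream tasks. MNIST and the layer sizes are stand-in assumptions, not anything prescribed by the review.

```python
# Minimal autoencoder for unsupervised representation learning (illustrative sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Encoder compresses the input into a 32-dimensional code; decoder reconstructs it.
encoder = models.Sequential([layers.Dense(128, activation="relu"),
                             layers.Dense(32, activation="relu")])
decoder = models.Sequential([layers.Dense(128, activation="relu"),
                             layers.Dense(784, activation="sigmoid")])
autoencoder = models.Sequential([encoder, decoder])

autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))

# The learned codes can serve as features for a downstream classifier.
codes = encoder.predict(x_test)
```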

            Research criteria for the diagnosis of Alzheimer's disease: revising the NINCDS-ADRDA criteria.

            The NINCDS-ADRDA and the DSM-IV-TR criteria for Alzheimer's disease (AD) are the prevailing diagnostic standards in research; however, they have now fallen behind the unprecedented growth of scientific knowledge. Distinctive and reliable biomarkers of AD are now available through structural MRI, molecular neuroimaging with PET, and cerebrospinal fluid analyses. This progress provides the impetus for our proposal of revised diagnostic criteria for AD. Our framework was developed to capture both the earliest stages, before full-blown dementia, as well as the full spectrum of the illness. These new criteria are centred on a clinical core of early and significant episodic memory impairment. They stipulate that there must also be at least one or more abnormal biomarkers among structural neuroimaging with MRI, molecular neuroimaging with PET, and cerebrospinal fluid analysis of amyloid beta or tau proteins. The timeliness of these criteria is highlighted by the many drugs in development that are directed at changing pathogenesis, particularly at the production and clearance of amyloid beta as well as at the hyperphosphorylation state of tau. Validation studies in existing and prospective cohorts are needed to advance these criteria and optimise their sensitivity, specificity, and accuracy.

              Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis.

              For the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also fusion of different modalities can further provide the complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To our best knowledge, the previous methods in the literature mostly used hand-crafted features such as cortical thickness, gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating into a long vector or transforming into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use Deep Boltzmann Machine (DBM)(2), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on ADNI dataset and compared with the state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained the maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET.
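
For intuition about the building block named above, here is a minimal NumPy sketch of a restricted Boltzmann machine trained with single-step contrastive divergence (CD-1) on synthetic binary data. It is not the authors' multimodal DBM, which stacks such layers and couples paired MRI and PET pathways; the unit counts, learning rate, and data here are illustrative assumptions.

```python
# Minimal RBM with CD-1 training on synthetic binary data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 64, 16, 0.05

W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible bias
b_h = np.zeros(n_hidden)    # hidden bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic binary "patches" as stand-in training data.
data = (rng.random((500, n_visible)) > 0.5).astype(float)

for epoch in range(10):
    for v0 in data:
        # Positive phase: sample hidden units given the data.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # Negative phase: one Gibbs step to reconstruct visibles, then hiddens.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)
        # CD-1 parameter updates.
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)
```

Stacking such layers and training them greedily, then jointly, is the general recipe behind DBM-style models; the multimodal variant in the paper learns a shared top layer over the MRI and PET pathways.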

                Author and article information

Journal
Electronics (ELECGJ), MDPI AG, ISSN 2079-9292
Volume 12, Issue 2, Article 469
Published 16 January 2023
DOI: 10.3390/electronics12020469
© 2023. Licensed under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
