
      Variational Autoencoders-Based Self-Learning Model for Tumor Identification and Impact Analysis from 2-D MRI Images


          Abstract

          Over the past few years, a tremendous change has occurred in computer-aided diagnosis (CAD) technology. The evolution of numerous medical imaging techniques has enhanced the accuracy of the preliminary analysis of several diseases. Magnetic resonance imaging (MRI) is a prevalent technology extensively used in evaluating the spread of malignant tissues or abnormalities in the human body. This article presents a computationally efficient mechanism that can accurately identify tumors in MRI images and analyze their impact. The proposed model is robust enough to classify tumors with minimal training data. Generative variational autoencoder models can reconstruct images nearly identical to the originals, and these reconstructions are used to train the model adequately. The proposed self-learning algorithm learns from both the autogenerated images and the original images. Incorporating long short-term memory (LSTM) speeds up the processing of the high-dimensional imaging data, making it easier for radiologists and practitioners to assess the tumor's progress. Self-learning models need comparatively little training data and are more resource-efficient than various state-of-the-art models. The efficiency of the proposed model has been assessed using various benchmark metrics, and the obtained results exhibit an accuracy of 89.7%. The study also presents an analysis of the progress of tumor growth. Although this accuracy falls short of what the healthcare domain demands, the model deals reasonably well with smaller datasets by making use of the image generation mechanism. The study outlines the role of an autoencoder in self-learning models. Future work may incorporate sturdy feature engineering models and optimized activation functions to yield better results.
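          The abstract stays at a high level, so as a rough, non-authoritative illustration of the generative component it describes, the sketch below shows a minimal variational autoencoder in PyTorch. The input size (64x64 single-channel slices), layer widths, and latent dimension are hypothetical stand-ins, not the authors' actual architecture.

```python
# Minimal sketch of a variational autoencoder for 2-D MRI slices,
# assuming PyTorch and hypothetical 64x64 single-channel inputs scaled
# to [0, 1]. Illustrative only, not the paper's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(64 * 64, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
        )
        self.mu = nn.Linear(128, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(128, latent_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x.flatten(1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    bce = F.binary_cross_entropy(recon, x.flatten(1), reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```

          Synthetic training images would then come from decoding latent samples z ~ N(0, I), and, per the abstract, an LSTM-based head would consume the resulting high-dimensional features for classification.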

          Related collections

          Most cited references (48)


          The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).

          In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients (manually annotated by up to four raters) and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked first for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
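          The fusion result the benchmark reports rests on per-voxel label voting. The sketch below is a simplified (flat rather than hierarchical) majority vote over hypothetical integer label maps, using NumPy; it illustrates the general technique, not the paper's exact fusion scheme.

```python
# Per-voxel majority vote across several segmentation label maps.
# Hypothetical, simplified illustration of segmentation fusion.
import numpy as np

def majority_vote(label_maps):
    """label_maps: list of integer arrays of identical shape."""
    stacked = np.stack(label_maps, axis=0)  # (n_algorithms, ...)
    n_labels = stacked.max() + 1
    # Count votes per label at every voxel, then take the argmax.
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Example: three algorithms disagreeing on a 2x2 region.
a = np.array([[0, 1], [1, 2]])
b = np.array([[0, 1], [2, 2]])
c = np.array([[0, 0], [1, 2]])
print(majority_vote([a, b, c]))  # [[0 1] [1 2]]
```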

            Brain tumor segmentation with Deep Neural Networks

            In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high-capacity DNN while being extremely efficient. Here, we describe the different model choices that we found necessary for obtaining competitive performance. In particular, we explore different architectures based on Convolutional Neural Networks (CNNs), i.e., DNNs specifically adapted to image data. We present a novel CNN architecture that differs from those traditionally used in computer vision. Our CNN exploits both local features and more global contextual features simultaneously. Also, unlike most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer, which allows a 40-fold speed-up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test dataset reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster.
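            The reported 40-fold speed-up comes from recasting the fully connected classification head as convolutions, so that one forward pass labels a whole slice instead of a single patch. The sketch below illustrates that equivalence in PyTorch; channel counts, kernel sizes, and the five-class output are illustrative guesses, not the paper's exact configuration.

```python
# A dense classification head reimplemented as a convolution, so the
# network can label every pixel of a larger input in one pass.
# All shapes here are hypothetical stand-ins.
import torch
import torch.nn as nn

features = nn.Sequential(                 # stand-in feature extractor
    nn.Conv2d(4, 64, kernel_size=7), nn.ReLU(),   # 4 MR modalities in
    nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
)

# Patch-wise head: flattens a fixed-size patch, one prediction per patch.
fc_head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 24 * 24, 5))

# Convolutional head: the same linear map expressed as a 24x24 conv, so a
# full slice yields a dense map of predictions in a single forward pass.
conv_head = nn.Conv2d(64, 5, kernel_size=24)

patch = torch.randn(1, 4, 32, 32)          # one 32x32 training patch
slice_ = torch.randn(1, 4, 256, 256)       # a whole slice at test time
print(fc_head(features(patch)).shape)      # torch.Size([1, 5])
print(conv_head(features(patch)).shape)    # torch.Size([1, 5, 1, 1])
print(conv_head(features(slice_)).shape)   # torch.Size([1, 5, 225, 225])
# With shared weights, fc_head and conv_head would compute identical
# scores; here they are randomly initialized, so only shapes agree.
```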

              Classification of Skin Disease Using Deep Learning Neural Networks with MobileNet V2 and LSTM

              Deep learning models are efficient in learning the features that assist in understanding complex patterns precisely. This study proposed a computerized process of classifying skin disease through deep learning-based MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved efficient, achieving better accuracy while remaining able to run on lightweight computational devices. The proposed model efficiently maintains stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), Convolutional Neural Networks (CNN), Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture expanded with a few changes. The HAM10000 dataset is used, and the proposed method has outperformed other methods with more than 85% accuracy. It recognizes the affected region much faster, with almost 2× fewer computations than the conventional MobileNet model, resulting in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action. It helps the patient and dermatologists identify the type of disease from the affected region's image at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
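              The grey-level co-occurrence matrix (GLCM) step is standard texture analysis. The sketch below uses scikit-image's graycomatrix/graycoprops (the spelling used since version 0.19) on random stand-in data rather than the HAM10000 images; summary statistics computed per visit could then be compared to track lesion growth, as the abstract suggests.

```python
# GLCM texture statistics for a lesion patch, using scikit-image >= 0.19.
# The image below is random stand-in data, not a real dermoscopy image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

lesion = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in patch

# Co-occurrence of grey levels at distance 1, four orientations.
glcm = graycomatrix(lesion, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Haralick-style statistics; changes across visits could indicate growth.
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```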

                Author and article information

                Contributors
                Journal
                J Healthc Eng
                Journal of Healthcare Engineering
                Hindawi
                ISSN (print): 2040-2295
                ISSN (electronic): 2040-2309
                Published: 17 January 2023
                Volume: 2023
                Article ID: 1566123
                Affiliations
                1Department of Computer Science and Engineering, Prasad V Potluri Siddhartha Institute of Technology, Vijayawada, Andhra Pradesh 520007, India
                2Department of Computer Science and Engineering, Dhanekula Institute of Engineering and Technology, Vijayawada, Andhra Pradesh 521139, India
                3Department of Computer Science, College of Computer Sciences and Information Technology, King Faisal University, Al-Ahsa 31982, Saudi Arabia
                4Department of Management Information Systems, College of Business Administration, King Faisal University, Al-Ahsa 31982, Saudi Arabia
                5Department of Information Systems—College of Computer and Information Science, King Saud University, Riyadh, Saudi Arabia
                Author notes

                Academic Editor: Ayush Dogra

                Author information
                https://orcid.org/0000-0001-9247-9132
                https://orcid.org/0000-0002-0528-8459
                https://orcid.org/0000-0003-1155-0991
                https://orcid.org/0000-0003-2400-616X
                https://orcid.org/0000-0002-6598-6240
                https://orcid.org/0000-0001-7191-2099
                Article
                DOI: 10.1155/2023/1566123
                PMC: PMC9873460
                PMID: 36704578
                Copyright © 2023 Parvathaneni Naga Srinivasu et al.

                This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

                History
                Received: 18 October 2022
                Revised: 13 December 2022
                Accepted: 7 January 2023
                Funding
                Funded by: King Faisal University
                Award ID: 1929
                Categories
                Research Article
