
      Automatic anatomical classification of esophagogastroduodenoscopy images using deep convolutional neural networks

Research article


          Abstract

The use of convolutional neural networks (CNNs) has dramatically advanced our ability to recognize images with machine learning methods. We aimed to construct a CNN that could correctly recognize the anatomical location of esophagogastroduodenoscopy (EGD) images. A CNN-based diagnostic program was constructed on the GoogLeNet architecture and trained with 27,335 EGD images categorized into four major anatomical locations (larynx, esophagus, stomach and duodenum), with stomach images further sub-classified into three regions (upper, middle, and lower). The performance of the CNN was evaluated on an independent validation set of 17,081 EGD images by drawing receiver operating characteristic (ROC) curves and calculating the areas under the curves (AUCs). The ROC curves showed high performance of the trained CNN in classifying the anatomical location of EGD images, with AUCs of 1.00 for larynx and esophagus images and 0.99 for stomach and duodenum images. Furthermore, the trained CNN could recognize specific anatomical locations within the stomach, with AUCs of 0.99 for the upper, middle, and lower stomach. In conclusion, the trained CNN showed robust performance in recognizing the anatomical location of EGD images, highlighting its significant potential for future application as a computer-aided EGD diagnostic system.
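The classify-then-evaluate workflow described above can be illustrated with a short, hypothetical sketch. This is not the authors' code: the class list, directory layout, and hyperparameters are assumptions, and torchvision's ImageNet-pretrained GoogLeNet stands in for the paper's GoogLeNet-based program.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms
    from sklearn.metrics import roc_auc_score
    from sklearn.preprocessing import label_binarize

    # Assumed label set: four major locations, with the stomach split into three regions.
    CLASSES = ["larynx", "esophagus", "stomach_upper", "stomach_middle",
               "stomach_lower", "duodenum"]

    # ImageNet-pretrained GoogLeNet with the final layer replaced for our classes.
    model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    # Hypothetical directory layout: one subfolder per anatomical class.
    train_dl = DataLoader(datasets.ImageFolder("egd/train", tfm), batch_size=32, shuffle=True)
    val_dl = DataLoader(datasets.ImageFolder("egd/val", tfm), batch_size=32)

    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for x, y in train_dl:  # one epoch shown for brevity
        opt.zero_grad()
        out = model(x)
        # GoogLeNet may return auxiliary outputs in train mode; keep the main logits.
        logits = out.logits if hasattr(out, "logits") else out
        loss_fn(logits, y).backward()
        opt.step()

    # Per-class ROC AUC (one-vs-rest), matching the paper's evaluation metric.
    model.eval()
    probs, labels = [], []
    with torch.no_grad():
        for x, y in val_dl:
            probs.append(torch.softmax(model(x), dim=1))
            labels.append(y)
    probs = torch.cat(probs).numpy()
    onehot = label_binarize(torch.cat(labels).numpy(), classes=list(range(len(CLASSES))))
    for i, name in enumerate(CLASSES):
        print(f"{name}: AUC = {roc_auc_score(onehot[:, i], probs[:, i]):.2f}")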

Most cited references (13)


          Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning

Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and the revival of deep CNNs. CNNs enable learning data-driven, highly representative, layered hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully apply CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we exploit three important, but previously understudied, factors of applying deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in depth. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection, with 85% sensitivity at three false positives per patient, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high-performance CAD systems for other medical imaging tasks.
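Of the strategies listed above, the "off-the-shelf pre-trained CNN features" route is the simplest to sketch. The snippet below is an illustrative assumption, not the paper's pipeline: a frozen ImageNet-pretrained ResNet-18 supplies features, random tensors stand in for medical images, and a logistic regression does the classification.

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from torchvision import models
    from sklearn.linear_model import LogisticRegression

    # Stand-in data: random tensors in place of real medical images (assumption).
    X = torch.randn(64, 3, 224, 224)
    y = torch.randint(0, 2, (64,))
    train_dl = DataLoader(TensorDataset(X[:48], y[:48]), batch_size=16)
    val_dl = DataLoader(TensorDataset(X[48:], y[48:]), batch_size=16)

    # Frozen ImageNet backbone: its 512-d penultimate activations are used "off the shelf".
    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    backbone.fc = torch.nn.Identity()
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad = False

    def extract(loader):
        feats, ys = [], []
        with torch.no_grad():
            for xb, yb in loader:
                feats.append(backbone(xb))
                ys.append(yb)
        return torch.cat(feats).numpy(), torch.cat(ys).numpy()

    X_tr, y_tr = extract(train_dl)
    X_va, y_va = extract(val_dl)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("validation accuracy:", clf.score(X_va, y_va))

Fine-tuning, by contrast, would unfreeze the backbone and train it end to end on the medical images, as in the earlier sketch.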

            Deep Learning for Fully-Automated Localization and Segmentation of Rectal Cancer on Multiparametric MR

Multiparametric Magnetic Resonance Imaging (MRI) can provide detailed information on the physical characteristics of rectal tumours. Several investigations suggest that volumetric analyses of anatomical and functional MRI contain clinically valuable information. However, manual delineation of tumours is a time-consuming procedure that requires a high level of expertise. Here, we evaluate deep learning methods for automatic localization and segmentation of rectal cancers on multiparametric MR imaging. MRI scans (1.5T, T2-weighted, and DWI) of 140 patients with locally advanced rectal cancer were included in our analysis, equally divided between discovery and validation datasets. Two expert radiologists segmented each tumour. A convolutional neural network (CNN) was trained on the multiparametric MRIs of the discovery set to classify each voxel as tumour or non-tumour. On the independent validation dataset, the CNN showed high segmentation accuracy against reader 1 (Dice Similarity Coefficient, DSC = 0.68) and reader 2 (DSC = 0.70). The area under the curve (AUC) of the resulting probability maps was very high for both readers, AUC = 0.99 (SD = 0.05). Our results demonstrate that deep learning can perform accurate localization and segmentation of rectal cancer in MR imaging in the majority of patients. Deep learning technologies have the potential to improve the speed and accuracy of MRI-based rectum segmentations.
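The Dice Similarity Coefficient reported above has a compact definition, DSC = 2|A ∩ B| / (|A| + |B|). A minimal numpy sketch (an assumption for illustration, not the paper's code) applied to toy binary masks:

    import numpy as np

    def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
        """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        return 2.0 * inter / (pred.sum() + truth.sum() + eps)

    # Toy 2-D example; real use would pass the 3-D voxel masks from the MR volume.
    a = np.zeros((8, 8)); a[2:6, 2:6] = 1   # "predicted" tumour mask
    b = np.zeros((8, 8)); b[3:7, 3:7] = 1   # "reader" segmentation
    print(f"DSC = {dice(a, b):.3f}")         # 2*9/(16+16) ~= 0.562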

              Big Data and machine learning in radiation oncology: State of the art and future prospects


                Author and article information

                Contributors
                tsuozawa244@gmail.com
Journal
Scientific Reports (Sci Rep)
Nature Publishing Group UK, London
ISSN: 2045-2322
Published: 14 May 2018
Volume: 8
Article number: 7497
Affiliations
[1] Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
[2] Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
[3] Department of Surgery, Sanno Hospital, International University of Health and Welfare, Tokyo, Japan
[4] Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
[5] Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
[6] Department of Global Health Policy, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
[7] Department of Epidemiology and Biostatistics, School of Public Health, Imperial College London, London, United Kingdom
[8] Graduate School of Public Health, Teikyo University, Tokyo, Japan
Author information
ORCID: http://orcid.org/0000-0002-5750-0976
Article
DOI: 10.1038/s41598-018-25842-6
PMCID: 5951793
PMID: 29760397
                © The Author(s) 2018

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

History
Received: 7 December 2017
Accepted: 30 April 2018
                Categories
                Article
