
      Automated detection of third molars and mandibular nerve by deep learning

      research-article


          Abstract

          The proximity of the inferior alveolar nerve (IAN) to the roots of lower third molars (M3) is a risk factor for nerve damage and subsequent sensory disturbances of the lower lip and chin following the removal of third molars. To assess this risk, the identification of the M3 and IAN on dental panoramic radiographs (OPGs) is mandatory. In this study, we developed and validated an automated, deep-learning-based approach to detect and segment the M3 and IAN on OPGs. As a reference, M3s and the IAN were segmented manually on 81 OPGs. A deep-learning approach based on U-Net was applied to the reference data to train the convolutional neural network (CNN) to detect and segment the M3 and IAN. Subsequently, the trained U-Net was applied to the original OPGs to detect and segment both structures. Dice coefficients were calculated to quantify the degree of similarity between the manually and automatically segmented M3s and IAN. The mean Dice coefficients for the M3s and IAN were 0.947 ± 0.033 and 0.847 ± 0.099, respectively. Deep learning is an encouraging approach for segmenting anatomical structures and, eventually, for supporting clinical decision making, though further enhancement of the algorithm is advised to improve its accuracy.
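The Dice coefficient used above to compare manual and automatic segmentations can be sketched in plain Python. The masks below are small hypothetical binary masks for illustration, not the study's data.

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks (flattened lists of 0/1),
    defined as 2 * |A ∩ B| / (|A| + |B|)."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    size_a = sum(mask_a)
    size_b = sum(mask_b)
    if size_a + size_b == 0:
        return 1.0  # two empty masks are treated as identical by convention
    return 2.0 * intersection / (size_a + size_b)

# Example: two flattened segmentation masks (hypothetical values)
manual    = [1, 1, 1, 0, 0, 1, 0, 0]
automatic = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice_coefficient(manual, automatic))  # prints 0.75
```

A value of 1.0 would mean perfect overlap with the manual reference, which is why the reported means of 0.947 (M3) and 0.847 (IAN) indicate close but imperfect agreement.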

          Related collections

          Most cited references (21)


          Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning

          Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs for medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we exploit three important, but previously understudied, factors of employing deep convolutional neural networks in computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in their number of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet models (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high-performance CAD systems for other medical imaging tasks.
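The "off-the-shelf pre-trained features" strategy described above keeps the pre-trained CNN frozen as a feature extractor and trains only a small classifier head on the target medical dataset. A minimal sketch, assuming synthetic random vectors stand in for the CNN activations (they are not outputs of any real network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen CNN features: 200 samples, 64-dimensional activations.
n_samples, n_features = 200, 64
features = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(size=n_features)
labels = (features @ true_w > 0).astype(float)  # synthetic binary labels

# Train only a logistic-regression head with plain gradient descent;
# the feature extractor itself is never updated (it stays "frozen").
w = np.zeros(n_features)
lr = 0.1
for _ in range(500):
    logits = features @ w
    probs = 1.0 / (1.0 + np.exp(-logits))
    grad = features.T @ (probs - labels) / n_samples
    w -= lr * grad

accuracy = ((features @ w > 0) == labels.astype(bool)).mean()
print(f"training accuracy of the linear head: {accuracy:.2f}")
```

In practice the frozen features would come from a network pre-trained on ImageNet, and full fine-tuning (the paper's other strategy) would additionally update the extractor's weights with a small learning rate.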

            3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation


              The radiological prediction of inferior alveolar nerve injury during third molar surgery.

              The surgical removal of an impacted mandibular third molar may result in damage to the inferior alveolar nerve and may cause disabling anaesthesia of the lip; anaesthesia of the lower gingivae and anterior teeth may also result. Assessing the likelihood of injury depends to a great extent on preoperative radiographic examination. Seven radiological diagnostic signs have been mentioned in the literature; the reliability of these signs as predictors of likely nerve injury has been evaluated through retrospective and prospective surveys. Three signs were found to be significantly related to nerve injury, and a further two were probably important clinically.

                Author and article information

                Contributors
                Tong.Xi@radboudumc.nl
                Journal
                Sci Rep (Scientific Reports)
                Nature Publishing Group UK (London)
                ISSN: 2045-2322
                21 June 2019
                Volume: 9
                Article number: 9007
                Affiliations
                [1] ISNI 0000 0004 0444 9382, GRID grid.10417.33, Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
                [2] ISNI 0000 0004 0444 9382, GRID grid.10417.33, Department of Neurosurgery, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
                [3] ISNI 0000 0004 0444 9382, GRID grid.10417.33, Radboudumc 3D Lab, Radboud University Medical Centre, Nijmegen, The Netherlands
                Article
                45487
                DOI: 10.1038/s41598-019-45487-3
                PMC: 6588560
                PMID: 31227772
                © The Author(s) 2019

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 11 December 2018
                Accepted: 29 May 2019
                Categories
                Article

                Keywords
                translational research, outcomes research
