Open Access

      Automated Prediction of Extraction Difficulty and Inferior Alveolar Nerve Injury for Mandibular Third Molar Using a Deep Neural Network

      Applied Sciences
      MDPI AG


          Abstract

The extraction of mandibular third molars is a common procedure in oral and maxillofacial surgery. Few studies simultaneously predict the extraction difficulty of a mandibular third molar and the complications that may occur. We therefore propose a method that automatically detects mandibular third molars in panoramic radiographic images and predicts both the extraction difficulty and the likelihood of inferior alveolar nerve (IAN) injury. Our dataset consists of 4903 panoramic radiographic images acquired from various dental hospitals, with detection and classification labels annotated by seven dentists. The detection model locates the mandibular third molar in the panoramic radiographic image. A region of interest (ROI) containing the detected mandibular third molar, the adjacent teeth, and the IAN is then cropped from the image. The classification models take this ROI as input and predict the extraction difficulty and the likelihood of IAN injury. The detection model achieved 99.0% mAP at an intersection over union (IoU) threshold of 0.5. In addition, we achieved 83.5% accuracy for predicting extraction difficulty and 81.1% accuracy for predicting the likelihood of IAN injury. These results demonstrate that a deep learning method can support diagnosis for mandibular third molar extraction.
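The detection score above is reported as mAP at an IoU threshold of 0.5. As a point of reference (not taken from the paper; the function name and box coordinates are illustrative), here is a minimal sketch of the intersection-over-union criterion for axis-aligned boxes in `(x1, y1, x2, y2)` order:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# At mAP@0.5, a predicted box counts as a correct detection only when
# its IoU with a ground-truth box is at least 0.5.
print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 3))  # 0.143
print(iou((0, 0, 2, 2), (1, 1, 3, 3)) >= 0.5)     # False
```

The two overlapping boxes share 1 unit of area out of a 7-unit union, so they would not count as a match at the 0.5 threshold.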

          Related collections

Most cited references: 48


          Adam: A Method for Stochastic Optimization

We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has low memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to the related algorithms that inspired Adam are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm. Published as a conference paper at the 3rd International Conference on Learning Representations, San Diego, 2015.
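The update rule summarized above (exponential moving averages of the first and second gradient moments, with bias correction) can be sketched in a few lines. This is an illustrative NumPy version of a single Adam step, not the authors' code; the default hyper-parameters match the values suggested in the paper:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update with bias-corrected moment estimates (t starts at 1)."""
    m = beta1 * m + (1 - beta1) * grad       # first moment: mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment: uncentered variance
    m_hat = m / (1 - beta1 ** t)             # bias correction for zero init
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2 starting from x = 5; the gradient is 2x.
theta = np.array([5.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 5001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
# theta has converged toward the minimum at 0.
```

Note that the effective step size is roughly bounded by `lr`, since `m_hat / sqrt(v_hat)` is close to ±1 while the gradient sign is consistent.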

            Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning

            Traditionally, medical discoveries are made by observing associations, making hypotheses from them and then designing and running experiments to test the hypotheses. However, with medical images, observing and quantifying associations can often be difficult because of the wide variety of features, patterns, colours, values and shapes that are present in real data. Here, we show that deep learning can extract new knowledge from retinal fundus images. Using deep-learning models trained on data from 284,335 patients and validated on two independent datasets of 12,026 and 999 patients, we predicted cardiovascular risk factors not previously thought to be present or quantifiable in retinal images, such as age (mean absolute error within 3.26 years), gender (area under the receiver operating characteristic curve (AUC) = 0.97), smoking status (AUC = 0.71), systolic blood pressure (mean absolute error within 11.23 mmHg) and major adverse cardiac events (AUC = 0.70). We also show that the trained deep-learning models used anatomical features, such as the optic disc or blood vessels, to generate each prediction.

              An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

              While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train. Fine-tuning code and pre-trained models are available at https://github.com/google-research/vision_transformer. ICLR camera-ready version with 2 small modifications: 1) Added a discussion of CLS vs GAP classifier in the appendix, 2) Fixed an error in exaFLOPs computation in Figure 5 and Table 6 (relative performance of models is basically not affected)
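The patch-sequence idea in the title ("an image is worth 16x16 words") amounts to a pure reshape: the image is cut into non-overlapping patches, each flattened into one token vector. This illustrative NumPy snippet (not the authors' code; `image_to_patches` is a name chosen here) shows the shape of the sequence a ViT would then linearly embed:

```python
import numpy as np

def image_to_patches(img, patch=16):
    """Split an H x W x C image into N = (H/patch) * (W/patch) flattened
    patch vectors, each of length patch * patch * C, as in ViT."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0
    rows, cols = h // patch, w // patch
    # Split both spatial axes into (blocks, within-block) pairs, then group
    # the two block axes together and flatten each patch.
    return (img.reshape(rows, patch, cols, patch, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(rows * cols, patch * patch * c))

img = np.zeros((224, 224, 3))  # a standard 224 x 224 RGB input
tokens = image_to_patches(img)
print(tokens.shape)  # (196, 768): 14 * 14 patches, 16 * 16 * 3 values each
```

For a 224 x 224 RGB image this yields a sequence of 196 tokens, the "words" the transformer encoder operates on.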

                Author and article information

Journal: Applied Sciences (ASPCC7)
Publisher: MDPI AG
ISSN: 2076-3417
Published: January 04 2022 (January 2022 issue)
Volume 12, Issue 1, Article 475
DOI: 10.3390/app12010475
ID: 37734774-0da7-4493-acd6-cd66f5e1e141
© 2022

License: https://creativecommons.org/licenses/by/4.0/
