
      Automated F18-FDG PET/CT image quality assessment using deep neural networks on a latest 6-ring digital detector system

      research-article


          Abstract

The aim of this study was to evaluate whether a machine learning classifier can assess the image quality of maximum intensity projection (MIP) images from F18-FDG-PET scans. A total of 400 MIP images from F18-FDG-PET with simulated decreasing acquisition times (120 s, 90 s, 60 s, 30 s and 15 s per bed position) were created using block sequential regularized expectation maximization (BSREM) reconstruction with beta-values of 450 and 600. A machine learning classifier was trained on 283 images rated "sufficient image quality" and 117 images rated "insufficient image quality". Classification performance was assessed by calculating sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC), using the reader-based classification as the reference standard. The classifier reached an AUC of 0.978 for BSREM beta 450 and 0.967 for BSREM beta 600, with a sensitivity of 89% and 94% and a specificity of 94% and 94% for the BSREM 450 and 600 reconstructions, respectively. Automated assessment of image quality from F18-FDG-PET images using a machine learning classifier thus provides performance equivalent to manual assessment by experienced radiologists.
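The evaluation metrics reported above (AUC, sensitivity, specificity against reader-based labels) can be computed from classifier scores with a few lines of NumPy. The sketch below is a minimal, hypothetical illustration, not the authors' pipeline: AUC is computed via the Mann-Whitney statistic, and sensitivity/specificity from already-binarized predictions.

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive case is scored above a randomly chosen negative case."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Pairwise score comparisons; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity from binary labels and predictions."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: reader labels (1 = insufficient quality) and classifier scores.
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
auc = auc_score(labels, scores)
sens, spec = sensitivity_specificity(labels, [0, 1, 1, 1])
```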


Most cited references (21)


          FDG PET/CT: EANM procedure guidelines for tumour imaging: version 2.0

          The purpose of these guidelines is to assist physicians in recommending, performing, interpreting and reporting the results of FDG PET/CT for oncological imaging of adult patients. PET is a quantitative imaging technique and therefore requires a common quality control (QC)/quality assurance (QA) procedure to maintain the accuracy and precision of quantitation. Repeatability and reproducibility are two essential requirements for any quantitative measurement and/or imaging biomarker. Repeatability relates to the uncertainty in obtaining the same result in the same patient when he or she is examined more than once on the same system. However, imaging biomarkers should also have adequate reproducibility, i.e. the ability to yield the same result in the same patient when that patient is examined on different systems and at different imaging sites. Adequate repeatability and reproducibility are essential for the clinical management of patients and the use of FDG PET/CT within multicentre trials. A common standardised imaging procedure will help promote the appropriate use of FDG PET/CT imaging and increase the value of publications and, therefore, their contribution to evidence-based medicine. Moreover, consistency in numerical values between platforms and institutes that acquire the data will potentially enhance the role of semiquantitative and quantitative image interpretation. Precision and accuracy are additionally important as FDG PET/CT is used to evaluate tumour response as well as for diagnosis, prognosis and staging. Therefore both the previous and these new guidelines specifically aim to achieve standardised uptake value harmonisation in multicentre settings.
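The repeatability these guidelines call for is commonly quantified with the Bland-Altman repeatability coefficient: 1.96 · √2 · the within-subject standard deviation, i.e. the bound below which 95% of absolute test-retest differences are expected to fall. A minimal sketch, assuming paired test-retest SUV measurements as plain arrays (the variable names are illustrative, not from the guidelines):

```python
import numpy as np

def repeatability_coefficient(scan1, scan2):
    """Bland-Altman repeatability coefficient for paired test-retest
    measurements: 1.96 * sqrt(2) * within-subject SD, where the
    within-subject SD is estimated from the paired differences."""
    d = np.asarray(scan1, dtype=float) - np.asarray(scan2, dtype=float)
    wsd = np.sqrt(np.mean(d**2) / 2.0)  # within-subject SD from paired diffs
    return 1.96 * np.sqrt(2.0) * wsd

# Hypothetical SUVmax values for the same lesions on test and retest scans.
rc = repeatability_coefficient([5.0, 6.0, 7.0], [4.0, 5.0, 6.0])
```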

            Fastai: A Layered API for Deep Learning

fastai is a deep learning library that provides practitioners with high-level components able to quickly and easily deliver state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches. It aims to do both without substantial compromises in ease of use, flexibility, or performance. This is possible thanks to a carefully layered architecture, which expresses common underlying patterns of many deep learning and data processing techniques in terms of decoupled abstractions. These abstractions can be expressed concisely and clearly by leveraging the dynamism of the underlying Python language and the flexibility of the PyTorch library. fastai includes: a new type dispatch system for Python along with a semantic type hierarchy for tensors; a GPU-optimized computer vision library which can be extended in pure Python; an optimizer which refactors out the common functionality of modern optimizers into two basic pieces, allowing optimization algorithms to be implemented in 4–5 lines of code; a novel 2-way callback system that can access any part of the data, model, or optimizer and change it at any point during training; a new data block API; and much more. We used this library to successfully create a complete deep learning course, which we were able to write more quickly than with previous approaches, and the resulting code was clearer. The library is already in wide use in research, industry, and teaching.
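The "type dispatch system" mentioned above selects behaviour based on the runtime types of arguments. As a rough analogy only (this is the standard library's mechanism, not fastai's actual API), Python's `functools.singledispatch` shows the core idea:

```python
from functools import singledispatch

@singledispatch
def show(item):
    """Fallback when no implementation is registered for the type."""
    raise TypeError(f"no show method for {type(item).__name__}")

@show.register
def _(item: str):
    # Implementation dispatched when the argument is a str.
    return f"text: {item}"

@show.register
def _(item: list):
    # Implementation dispatched when the argument is a list.
    return f"batch of {len(item)} items"
```

Calling `show("hello")` or `show([1, 2, 3])` picks the registered implementation by argument type; fastai builds a richer version of this idea that also covers tensor subtypes.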

              Multimodal and Multiscale Deep Neural Networks for the Early Diagnosis of Alzheimer’s Disease using structural MR and FDG-PET images

Alzheimer’s Disease (AD) is a progressive neurodegenerative disease in which biomarkers based on pathophysiology may provide objective measures for disease diagnosis and staging. Neuroimaging scans acquired with MRI and metabolism images obtained by FDG-PET provide in-vivo measurements of structure and function (glucose metabolism) in the living brain. It is hypothesized that combining multiple image modalities that provide complementary information could help improve early diagnosis of AD. In this paper, we propose a novel deep-learning-based framework to discriminate individuals with AD utilizing a multimodal and multiscale deep neural network. Our method delivers 82.4% accuracy in identifying individuals with mild cognitive impairment (MCI) who will convert to AD 3 years prior to conversion (86.4% combined accuracy for conversion within 1–3 years), 94.23% sensitivity in classifying individuals with a clinical diagnosis of probable AD, and 86.3% specificity in classifying non-demented controls, improving upon results in the published literature.
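The multimodal idea, encoding each modality separately and combining the learned representations before classification (late fusion), can be sketched with random weights in NumPy. Everything here (the shapes, the single-layer "encoders", fusion by concatenation) is an illustrative toy, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """A toy single-layer encoder for one modality (linear map + ReLU)."""
    return np.maximum(x @ w, 0.0)

# Hypothetical per-subject feature vectors from two modalities.
mri_feat = rng.normal(size=(1, 64))   # e.g. structural-MRI-derived features
pet_feat = rng.normal(size=(1, 32))   # e.g. FDG-PET-derived features

# Random (untrained) weights, stand-ins for learned parameters.
w_mri = rng.normal(size=(64, 16))
w_pet = rng.normal(size=(32, 16))
w_out = rng.normal(size=(32, 1))

# Late fusion: encode each modality separately, concatenate, classify.
fused = np.concatenate([encode(mri_feat, w_mri), encode(pet_feat, w_pet)], axis=1)
logit = fused @ w_out
prob = 1.0 / (1.0 + np.exp(-logit))   # sigmoid output for a binary label
```

With trained weights, `prob` would be the predicted probability of the positive class (e.g. MCI-to-AD converter); the fusion step is where the complementary modality information is combined.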

                Author and article information

                Contributors
                michael.messerli@usz.ch
Journal
Sci Rep (Scientific Reports)
Nature Publishing Group UK (London)
ISSN: 2045-2322
Published: 13 July 2023
Volume: 13
Article number: 11332
Affiliations
[1] Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, 8091 Zurich, Switzerland
[2] Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
[3] University of Zurich, Zurich, Switzerland
[4] Institute of Food, Nutrition and Health, Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
[5] Department of Radiology and Nuclear Medicine, Children’s Hospital of Eastern Switzerland, St. Gallen, Switzerland
[6] Department of Medical Oncology, University Hospital Zurich, Zurich, Switzerland
Article
DOI: 10.1038/s41598-023-37182-1
PMCID: 10344880
PMID: 37443158
                © The Author(s) 2023

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

History
1 February 2022
17 June 2023
Funding
Funded by: MedLab Fellowship at ETH Zurich
Funded by: Palatin-Foundation
Funded by: Schweizerische Herzstiftung (FundRef: http://dx.doi.org/10.13039/501100004362), Award ID: FF19097
Funded by: Swiss Academy of Medical Sciences
Funded by: Gottfried and Julia Bangerter-Rhyner Foundation
Funded by: CRPP AI Oncological Imaging Network of the University of Zurich
Funded by: GE Healthcare (FundRef: http://dx.doi.org/10.13039/100006775)
Funded by: Alfred and Annemarie von Sick legacy for translational and clinical cardiac and oncological research
Funded by: Iten-Kohaut Foundation
                Categories
                Article
                Custom metadata
                © Springer Nature Limited 2023

Keywords
medical research, molecular medicine, cancer imaging
