      A review of PET attenuation correction methods for PET-MR

Review article


          Abstract

          Although thirteen years have passed since the installation of the first PET-MR system, these scanners still constitute a very small proportion of all hybrid PET systems installed. This is in stark contrast to the rapid expansion of PET-CT, which established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have led the scientific community into a continuous effort to develop a robust and accurate alternative. These alternatives can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine-learning-based attenuation correction, the last of which is rapidly gaining momentum. The first is based on segmenting the MR images into various tissue classes and allocating a predefined attenuation coefficient to each tissue. Emission-based attenuation correction methods exploit the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image for a new patient from their MR image, using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that predicts the required image from the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category: compared to more traditional machine learning, which uses structured data to build a model, deep learning works directly on the acquired images to identify the underlying features. This up-to-date review categorises the attenuation correction approaches reported for PET-MR and goes through the literature of each category, describing and discussing the various approaches. After exploring each category separately, a general overview of the current status and potential future approaches is given, along with a comparison of the four outlined categories.
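          To make the segmentation-based idea concrete, the sketch below converts a tissue label volume (for example, one obtained from a segmented Dixon MR) into a 511 keV attenuation map by assigning one predefined coefficient per class. The class list and coefficient values are illustrative assumptions based on values commonly quoted in the literature, not those of any particular scanner or vendor.

```python
import numpy as np

# Indicative linear attenuation coefficients at 511 keV (cm^-1); exact values
# vary between publications and implementations.
MU_511_KEV = {
    0: 0.000,   # air / background
    1: 0.018,   # lung
    2: 0.086,   # fat
    3: 0.096,   # soft tissue / water
    4: 0.151,   # cortical bone (often omitted in early Dixon-based methods)
}

def labels_to_mumap(label_volume: np.ndarray) -> np.ndarray:
    """Convert an integer tissue-label volume into a mu-map by assigning a
    predefined attenuation coefficient to each tissue class."""
    mumap = np.zeros(label_volume.shape, dtype=np.float32)
    for label, mu in MU_511_KEV.items():
        mumap[label_volume == label] = mu
    return mumap

# Toy example: a random 3D label volume standing in for a segmented MR.
labels = np.random.randint(0, 5, size=(8, 8, 8))
mu = labels_to_mumap(labels)
```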

          Most cited references: 228


          elastix: a toolbox for intensity-based medical image registration.

          Medical image registration is an important task in medical image processing. It refers to the process of aligning data sets, possibly from different modalities (e.g., magnetic resonance and computed tomography), different time points (e.g., follow-up scans), and/or different subjects (in case of population studies). A large number of methods for image registration are described in the literature. Unfortunately, there is not one method that works for all applications. We have therefore developed elastix, a publicly available computer program for intensity-based medical image registration. The software consists of a collection of algorithms that are commonly used to solve medical image registration problems. The modular design of elastix allows the user to quickly configure, test, and compare different registration methods for a specific application. The command-line interface enables automated processing of large numbers of data sets, by means of scripting. The usage of elastix for comparing different registration methods is illustrated with three example experiments, in which individual components of the registration method are varied.
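            Because elastix is driven from the command line, atlas-based attenuation correction pipelines typically script it. The sketch below (Python, with placeholder file names and a placeholder parameter file) registers an atlas MR to a patient MR; it is a minimal illustration of the invocation, not a complete atlas-based pipeline.

```python
import subprocess
from pathlib import Path

# Placeholder inputs; elastix accepts common ITK-readable formats (e.g. .mhd, .nii.gz).
fixed = "patient_MR.nii.gz"        # fixed image: the new patient's MR
moving = "atlas_MR.nii.gz"         # moving image: an atlas MR with a paired CT
params = "parameters_bspline.txt"  # elastix parameter file defining the registration
out_dir = Path("elastix_out")
out_dir.mkdir(exist_ok=True)

# Standard elastix command-line invocation:
#   elastix -f <fixed> -m <moving> -p <parameter file> -out <output directory>
subprocess.run(
    ["elastix", "-f", fixed, "-m", moving, "-p", params, "-out", str(out_dir)],
    check=True,
)

# The estimated transform (TransformParameters.0.txt in the output directory)
# can then be applied to the paired atlas CT with transformix to obtain a
# pseudo-CT aligned to the patient.
```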

            Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation

            We propose a dual pathway, 11-layers deep, three-dimensional Convolutional Neural Network for the challenging task of brain lesion segmentation. The devised architecture is the result of an in-depth analysis of the limitations of current networks proposed for similar applications. To overcome the computational burden of processing 3D medical scans, we have devised an efficient and effective dense training scheme which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data. Further, we analyze the development of deeper, thus more discriminative 3D CNNs. In order to incorporate both local and larger contextual information, we employ a dual pathway architecture that processes the input images at multiple scales simultaneously. For post-processing of the network's soft segmentation, we use a 3D fully connected Conditional Random Field which effectively removes false positives. Our pipeline is extensively evaluated on three challenging tasks of lesion segmentation in multi-channel MRI patient data with traumatic brain injuries, brain tumours, and ischemic stroke. We improve on the state-of-the-art for all three applications, with top ranking performance on the public benchmarks BRATS 2015 and ISLES 2015. Our method is computationally efficient, which allows its adoption in a variety of research and clinical settings. The source code of our implementation is made publicly available.
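              A minimal sketch of the dual-pathway idea described above is given below: one stream processes the input patch at full resolution while a second stream sees a downsampled version for wider context, and their features are fused for voxel-wise classification. The PyTorch implementation, layer sizes and channel counts are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPathway3DCNN(nn.Module):
    """Toy dual-pathway 3D CNN: a full-resolution stream plus a downsampled
    context stream, fused before per-voxel classification."""
    def __init__(self, in_ch=1, n_classes=2, width=16):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv3d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv3d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.local_path = stream()    # full-resolution pathway
        self.context_path = stream()  # low-resolution (wider context) pathway
        self.classifier = nn.Conv3d(2 * width, n_classes, 1)

    def forward(self, patch):
        local_feat = self.local_path(patch)
        # Downsample for context, process, then upsample back to patch size.
        context = F.interpolate(patch, scale_factor=0.5, mode="trilinear",
                                align_corners=False)
        context_feat = self.context_path(context)
        context_feat = F.interpolate(context_feat, size=patch.shape[2:],
                                     mode="trilinear", align_corners=False)
        return self.classifier(torch.cat([local_feat, context_feat], dim=1))

# Example forward pass on a random 32x32x32 patch.
logits = DualPathway3DCNN()(torch.randn(1, 1, 32, 32, 32))
```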

              MR-based synthetic CT generation using a deep convolutional neural network method.

              Xiao Han (2017)
              Interests have been rapidly growing in the field of radiotherapy to replace CT with magnetic resonance imaging (MRI), due to superior soft tissue contrast offered by MRI and the desire to reduce unnecessary radiation dose. MR-only radiotherapy also simplifies clinical workflow and avoids uncertainties in aligning MR with CT. Methods, however, are needed to derive CT-equivalent representations, often known as synthetic CT (sCT), from patient MR images for dose calculation and DRR-based patient positioning. Synthetic CT estimation is also important for PET attenuation correction in hybrid PET-MR systems. We propose in this work a novel deep convolutional neural network (DCNN) method for sCT generation and evaluate its performance on a set of brain tumor patient images.
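                As a hedged illustration of how such an MR-to-synthetic-CT network is commonly trained, the sketch below regresses CT intensities from an MR slice with a small encoder-decoder and a voxel-wise mean absolute error loss. The architecture, sizes and stand-in data are assumptions for illustration, not the method of the cited paper.

```python
import torch
import torch.nn as nn

class MRtoSCTNet(nn.Module):
    """Small 2D encoder-decoder that regresses CT intensities (HU) from an MR
    slice; depth and channel counts are illustrative only."""
    def __init__(self, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width, width, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 3, padding=1),  # one output channel: predicted HU
        )

    def forward(self, mr):
        return self.decoder(self.encoder(mr))

model = MRtoSCTNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # mean absolute error, a common choice for sCT regression

# One illustrative training step on random stand-in data (real training would
# use co-registered MR/CT slice pairs).
mr_batch = torch.randn(4, 1, 64, 64)
ct_batch = torch.randn(4, 1, 64, 64)
loss = loss_fn(model(mr_batch), ct_batch)
optimiser.zero_grad()
loss.backward()
optimiser.step()
```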

                Author and article information

                Contributors
                georgios.krokos@kcl.ac.uk
                Journal
                EJNMMI Physics (EJNMMI Phys)
                Springer International Publishing (Cham)
                ISSN: 2197-7364
                Published online: 11 September 2023
                Issue date: December 2023
                Volume: 10
                Article number: 52
                Affiliations
                School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas’ Hospital London, King’s College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London SE1 7EH, UK (GRID: grid.13097.3c; ISNI: 0000 0001 2322 6764)
                Author information
                http://orcid.org/0000-0002-9776-4714
                http://orcid.org/0000-0002-0936-9993
                http://orcid.org/0000-0001-9892-9640
                Article
                Article ID: 569
                DOI: 10.1186/s40658-023-00569-0
                PMCID: 10495310
                PMID: 37695384
                Record ID: 16b0bc35-5651-4f64-b6ed-d3215ea06be2
                © Springer Nature Switzerland AG 2023

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 14 April 2023
                Accepted: 7 August 2023
                Funding
                Funded by: Wellcome Trust (FundRef: http://dx.doi.org/10.13039/100010269); Award ID: WT 203148/A/16/Z
                Funded by: Centre for Medical Engineering, King’s College London (FundRef: http://dx.doi.org/10.13039/501100023312); Award ID: WT 203148/A/16/Z
                Categories
                Review

                Keywords: PET-MR, attenuation correction, Dixon, UTE, MLAA, atlas-based attenuation correction, deep-learning-based attenuation correction, pseudo-CT
