
      Deep learning–based MR‐to‐CT synthesis: The influence of varying gradient echo–based MR images as input channels

      research-article


          Abstract

          Purpose

          To study the influence of gradient echo–based contrasts as input channels to a 3D patch‐based neural network trained for synthetic CT (sCT) generation in canine and human populations.

          Methods

          Magnetic resonance images and CT scans of human and canine pelvic regions were acquired and paired using nonrigid registration. Magnitude MR images and Dixon-reconstructed water, fat, in-phase, and opposed-phase images were obtained from a single T1-weighted multi-echo gradient-echo acquisition. From this set, 6 input configurations were defined, each containing 1 to 4 MR images regarded as input channels. For each configuration, a UNet-derived deep learning model was trained for synthetic CT generation. Reconstructed Hounsfield unit maps were evaluated with peak SNR, mean absolute error, and mean error. The Dice similarity coefficient and surface distance maps assessed the geometric fidelity of bones. Repeatability was estimated by replicating the training up to 10 times.
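The voxel-wise metrics named above (mean absolute error, mean error, and peak SNR on Hounsfield unit maps) can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the function name, the optional mask, and the 4000 HU dynamic range used for peak SNR are assumptions.

```python
import numpy as np

def hu_agreement(sct, ct, mask=None, hu_range=4000.0):
    """Voxel-wise agreement between a synthetic CT (sct) and a reference
    CT (ct), both in Hounsfield units; returns (MAE, ME, PSNR).

    mask optionally restricts evaluation (e.g. to the body contour or to
    bone); hu_range is the dynamic range assumed for peak SNR.
    """
    sct = np.asarray(sct, dtype=np.float64)
    ct = np.asarray(ct, dtype=np.float64)
    if mask is None:
        mask = np.ones(ct.shape, dtype=bool)
    diff = sct[mask] - ct[mask]
    mae = np.abs(diff).mean()          # mean absolute error
    me = diff.mean()                   # mean error (signed bias)
    psnr = 10.0 * np.log10(hu_range**2 / (diff**2).mean())  # peak SNR, dB
    return mae, me, psnr
```

Passing a bone mask (e.g. from a thresholded reference CT) gives the bone-restricted error figures reported in the Results.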

          Results

          Seventeen canines and 23 human subjects were included in the study. Performance and repeatability of single-channel models depended on the TE-related water–fat interference, with variations of up to 17% in mean absolute error overall and up to 28% specifically in bones. Repeatability, Dice similarity coefficient, and mean absolute error were statistically significantly better in multichannel models, with mean absolute error ranging from 33 to 40 Hounsfield units in humans and from 35 to 47 Hounsfield units in canines.
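The Dice similarity coefficient used here to assess bone fidelity is computed from binary masks; a minimal sketch (the 200 HU bone threshold in the usage comment is an illustrative assumption, not a value from the paper):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two boolean masks:
    2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# Usage: compare bone masks obtained by thresholding the synthetic and
# reference CT at an assumed bone threshold of 200 HU.
# bone_sct = sct > 200.0
# bone_ct = ct > 200.0
# dsc = dice_coefficient(bone_sct, bone_ct)
```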

          Conclusion

          Significant differences in performance and robustness of deep learning models for synthetic CT generation were observed depending on the input. In‐phase images outperformed opposed‐phase images, and Dixon reconstructed multichannel inputs outperformed single‐channel inputs.

          Related collections

          Most cited references (31)

          MR-based synthetic CT generation using a deep convolutional neural network method.

          Xiao Han (2017)
          Interest has been growing rapidly in the field of radiotherapy in replacing CT with magnetic resonance imaging (MRI), owing to the superior soft-tissue contrast offered by MRI and the desire to reduce unnecessary radiation dose. MR-only radiotherapy also simplifies the clinical workflow and avoids uncertainties in aligning MR with CT. Methods, however, are needed to derive CT-equivalent representations, often known as synthetic CT (sCT), from patient MR images for dose calculation and DRR-based patient positioning. Synthetic CT estimation is also important for PET attenuation correction in hybrid PET-MR systems. We propose in this work a novel deep convolutional neural network (DCNN) method for sCT generation and evaluate its performance on a set of brain tumor patient images.

            A review of substitute CT generation for MRI-only radiation therapy

            Radiotherapy based on magnetic resonance imaging as the sole modality (MRI-only RT) is an area of growing scientific interest due to the increasing use of MRI for both target and normal tissue delineation and the development of MR-based delivery systems. One major issue in MRI-only RT is the assignment of electron densities (ED) to MRI scans for dose calculation, and a similar need for attenuation correction arises in hybrid PET/MR systems. The ED-assigned MRI scan is here named a substitute CT (sCT). In this review, we report a collection of typical performance values for a number of main approaches encountered in the literature for sCT generation as compared to CT. A literature search in the Scopus database resulted in 254 papers which were included in this investigation. A final number of 50 contributions which fulfilled all inclusion criteria were categorized according to applied method, MRI sequence/contrast involved, number of subjects included, and anatomical site investigated. The latter included brain, torso, prostate, and phantoms. The contributions' geometric and/or dosimetric performance metrics were also noted. The majority of studies are carried out on the brain for 5–10 patients with PET/MR applications in mind, using a voxel-based method. T1-weighted images are most commonly applied. The overall dosimetric agreement is on the order of 0.3–2.5%. A strict gamma criterion of 1% and 1 mm has a range of passing rates from 68 to 94%, while less strict criteria show pass rates > 98%. The mean absolute error (MAE) is between 80 and 200 HU for the brain and around 40 HU for the prostate. The Dice score for bone is between 0.5 and 0.95. Specificity and sensitivity are reported in the upper 80s% for both quantities, and correctly classified voxels average around 84%. The review shows that a variety of promising approaches exist that seem clinically acceptable even with standard clinical MRI sequences. A consistent reference frame for method benchmarking is probably necessary to move the field further towards widespread clinical implementation.

              Deep Learning MR Imaging–based Attenuation Correction for PET/MR Imaging

              Purpose: To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods: A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study was carried out utilizing simultaneous PET/MR imaging for five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches with CT-based AC. Results: Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion: The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared with current MR imaging-based AC approaches. © RSNA, 2017.

                Author and article information

                Contributors
                m.c.florkow@umcutrecht.nl
                Journal
                Magn Reson Med
                Magn Reson Med
                10.1002/(ISSN)1522-2594
                MRM
                Magnetic Resonance in Medicine
                John Wiley and Sons Inc. (Hoboken )
                0740-3194
                1522-2594
                08 October 2019
                April 2020
                83(4): 1429-1441 (issue DOI: 10.1002/mrm.v83.4)
                Affiliations
                [1] Image Sciences Institute, University Medical Center Utrecht, Utrecht, Netherlands
                [2] Department of Orthopedics, University Medical Center Utrecht, Utrecht, Netherlands
                [3] Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Utrecht, Netherlands
                [4] Computational Imaging Group for MR diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Utrecht, Netherlands
                [5] MRIguidance B.V., Utrecht, Netherlands
                Author notes
                [*] Correspondence

                Mateusz C. Florkow, Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, Utrecht 3508 GA, Netherlands.

                Email: m.c.florkow@umcutrecht.nl

                Author information
                https://orcid.org/0000-0003-2520-7705
                https://orcid.org/0000-0002-9184-7666
                https://orcid.org/0000-0003-0347-3375
                Article
                MRM28008
                DOI: 10.1002/mrm.28008
                PMCID: 6972695
                PMID: 31593328
                © 2019 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine

                This is an open access article under the terms of the http://creativecommons.org/licenses/by/4.0/ License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

                History
                12 May 2019
                30 August 2019
                31 August 2019
                Page count
                Figures: 7, Tables: 2, Pages: 13, Words: 15690
                Funding
                Funded by: Stichting voor de Technische Wetenschappen (open-funder-registry: 10.13039/501100003958)
                Award ID: 15479
                Categories
                Full Paper
                Full Papers—Computer Processing and Modeling

                Radiology & Imaging
                deep learning, gradient echo, MR contrasts, synthetic CT
