
      Motion correction and its impact on quantification in dynamic total-body 18F-fluorodeoxyglucose PET


          Abstract

          Background

          The total-body positron emission tomography (PET) scanner provides an unprecedented opportunity to image the whole body simultaneously, thanks to its long axial field of view and ultrahigh temporal resolution. To fully exploit this potential in clinical settings, a dynamic scan is necessary to extract the desired kinetic information from the acquired data. However, during a long dynamic acquisition, patient movement can degrade image quality and quantification accuracy.

          Methods

          In this work, we demonstrated a motion correction framework and its importance in dynamic total-body FDG PET imaging. Dynamic FDG scans from 12 subjects acquired on a uEXPLORER PET/CT were included: 7 were healthy subjects and 5 had tumors in the thorax or abdomen. All scans were contaminated by motion to some degree, and for each scan the list-mode data were reconstructed into 1-min frames. The dynamic frames were aligned to a reference position by sequentially registering each frame to its previous neighboring frame. We parametrized the motion fields between frames as diffeomorphisms, which map the shape change of the object smoothly and continuously in time and space. Diffeomorphic representations of the motion fields were derived by registering neighboring frames using large deformation diffeomorphic metric mapping. Once all pairwise registrations were completed, the motion field at each frame was obtained by concatenating the successive motion fields and transforming that frame into the reference position. The proposed correction method was labeled SyN-seq. A variant that performed the same registration but aligned each frame to a designated middle frame was labeled SyN-mid, and a variant that performed sequential affine registration instead of SyN was labeled Aff-seq. The original uncorrected images were labeled NMC. Qualitative and quantitative analyses were performed to compare the performance of the proposed method with that of the other correction methods and the uncorrected images.
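          To make the sequential alignment concrete, the sketch below uses ANTsPy (not the authors' implementation) to register each 1-min frame to the previously corrected frame with SyN and to resample it into the reference position defined by the first frame. This is a simplified variant of the SyN-seq idea: it avoids explicitly concatenating the pairwise motion fields, and the file names, frame count, and choice of the first frame as reference are assumptions made purely for illustration.

```python
# Minimal sketch of sequential SyN-based frame alignment (not the authors' code).
# Assumes ANTsPy ("pip install antspyx") and a list of 1-min frame files; the
# first frame is taken as the reference position purely for illustration.
import ants

frame_files = [f"frame_{i:03d}.nii.gz" for i in range(60)]  # hypothetical paths

reference = ants.image_read(frame_files[0])
corrected = [reference]  # frame 0 defines the reference position

for path in frame_files[1:]:
    moving = ants.image_read(path)
    # Register the current frame to the previously corrected frame, which is
    # already in the reference position (a simplification of SyN-seq, which
    # registers to the original neighboring frame and concatenates the fields).
    reg = ants.registration(fixed=corrected[-1], moving=moving,
                            type_of_transform="SyN")
    # Resample the frame into the reference position with the estimated
    # diffeomorphic transform.
    aligned = ants.apply_transforms(fixed=reference, moving=moving,
                                    transformlist=reg["fwdtransforms"])
    corrected.append(aligned)

# "corrected" now holds the motion-aligned dynamic series; kinetic modeling
# (e.g., Ki estimation) would then be run on these aligned frames.
```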

          Results

          The results indicated that visual improvement was achieved after correction of the SUV images during the periods when motion was present, especially in the brain and abdomen. For subjects with tumors, the average improvement in tumor SUVmean was 5.35 ± 4.92% (P = 0.047), with a maximum improvement of 12.89%. An overall quality improvement in quantitative Ki images was also observed after correction; however, the improvement was less obvious in K1 images. Sampled time–activity curves in the cerebral cortex and kidney cortex were less affected by motion after applying the proposed correction. Mutual information and Dice coefficient relative to the reference also demonstrated that SyN-seq improved inter-frame alignment over the non-corrected images (P = 0.003 and P = 0.011, respectively). Moreover, the proposed correction reduced the inter-subject variability in Ki quantification (11.8% lower in the sampled organs). Subjective assessment by experienced radiologists showed consistent results for both SUV and Ki images.
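          For readers who want to reproduce alignment metrics of this kind, the following is a generic sketch (not the authors' evaluation code) of the two measures reported above: the Dice coefficient between organ masks and a histogram-based mutual information between a frame and the reference. The array inputs and bin count are illustrative choices.

```python
# Generic sketch of the two alignment metrics mentioned above (Dice coefficient
# and mutual information); inputs and the histogram bin count are illustrative.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary masks (e.g., organ segmentations)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mutual_information(frame: np.ndarray, reference: np.ndarray, bins: int = 64) -> float:
    """Mutual information estimated from the joint intensity histogram."""
    joint, _, _ = np.histogram2d(frame.ravel(), reference.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over frame intensities
    py = pxy.sum(axis=0, keepdims=True)   # marginal over reference intensities
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```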

          Conclusion

          Motion correction is important for image quality and quantification in dynamic total-body PET imaging. We demonstrated a correction framework that can effectively reduce the effect of random body movements on dynamic images and the associated quantification. The proposed correction framework can potentially benefit applications that require total-body assessment, such as imaging of the brain-gut axis and systemic diseases.


          Most cited references (42)


          Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain.

          One of the most challenging problems in modern neuroimaging is detailed characterization of neurodegeneration. Quantifying spatial and longitudinal atrophy patterns is an important component of this process. These spatiotemporal signals will aid in discriminating between related diseases, such as frontotemporal dementia (FTD) and Alzheimer's disease (AD), which manifest themselves in the same at-risk population. Here, we develop a novel symmetric image normalization method (SyN) for maximizing the cross-correlation within the space of diffeomorphic maps and provide the Euler-Lagrange equations necessary for this optimization. We then turn to a careful evaluation of our method. Our evaluation uses gold standard, human cortical segmentation to contrast SyN's performance with a related elastic method and with the standard ITK implementation of Thirion's Demons algorithm. The new method compares favorably with both approaches, in particular when the distance between the template brain and the target brain is large. We then report the correlation of volumes gained by algorithmic cortical labelings of FTD and control subjects with those gained by the manual rater. This comparison shows that, of the three methods tested, SyN's volume measurements are the most strongly correlated with volume measurements gained by expert labeling. This study indicates that SyN, with cross-correlation, is a reliable method for normalizing and making anatomical measurements in volumetric MRI of patients and at-risk elderly individuals.
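            As a practical pointer, SyN driven by the cross-correlation metric described here is exposed in ANTsPy; a minimal, hedged example of invoking it follows (image paths are placeholders, not data from this study).

```python
# Minimal example of symmetric normalization (SyN) with the cross-correlation
# similarity metric, as exposed in ANTsPy; file names are placeholders.
import ants

fixed = ants.image_read("template_brain.nii.gz")
moving = ants.image_read("subject_brain.nii.gz")

# "SyNCC" selects SyN optimized with the cross-correlation metric.
reg = ants.registration(fixed=fixed, moving=moving, type_of_transform="SyNCC")
warped = reg["warpedmovout"]  # moving image normalized into the template space
```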

            The Insight ToolKit image registration framework

            Publicly available scientific resources help establish evaluation standards, provide a platform for teaching and improve reproducibility. Version 4 of the Insight ToolKit (ITK4) seeks to establish new standards in publicly available image registration methodology. ITK4 makes several advances in comparison to previous versions of ITK. It supports both multivariate images and objective functions; it also unifies high-dimensional (deformation field) and low-dimensional (affine) transformations with metrics that are reusable across transform types and with composite transforms that allow arbitrary series of geometric mappings to be chained together seamlessly. Metrics and optimizers take advantage of multi-core resources, when available. Furthermore, ITK4 reduces the parameter optimization burden via principled heuristics that automatically set scaling across disparate parameter types (rotations vs. translations). A related approach also constrains step sizes for gradient-based optimizers. The result is that tuning for different metrics and/or image pairs is rarely necessary, allowing the researcher to focus more easily on the design and comparison of registration strategies. In total, the ITK4 contribution is intended as a structure to support reproducible research practices, to provide a more extensive foundation against which to evaluate new work in image registration, and to offer application-level programmers a broad suite of tools on which to build. Finally, we contextualize this work with a reference registration evaluation study with application to pediatric brain labeling.
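            As an illustration of the framework described above, the sketch below uses SimpleITK (the simplified Python wrapping of ITK) to set up a registration with a Mattes mutual information metric, a gradient-descent optimizer with the automatic physical-shift parameter scaling mentioned in the abstract, and resampling with the estimated affine transform; the file names are placeholders.

```python
# Minimal SimpleITK sketch of the ITK registration framework described above:
# a reusable metric (Mattes MI), gradient descent with automatic parameter
# scaling, and an affine transform. File names are placeholders.
import SimpleITK as sitk

fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)

# Initialize an affine transform by aligning the geometric centers.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.AffineTransform(fixed.GetDimension()),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
# Automatic scaling across disparate parameter types (rotations vs. translations).
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

final_transform = reg.Execute(fixed, moving)

# Resample the moving image into the fixed image space with the estimated transform.
resampled = sitk.Resample(moving, fixed, final_transform,
                          sitk.sitkLinear, 0.0, moving.GetPixelID())
sitk.WriteImage(resampled, "moving_registered.nii.gz")
```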

              Performance Evaluation of the uEXPLORER Total-Body PET/CT Scanner Based on NEMA NU 2-2018 with Additional Tests to Characterize PET Scanners with a Long Axial Field of View


                Author and article information

                Contributors
                tao.sun@siat.ac.cn
                mywang@ha.edu.cn
                Journal
                EJNMMI Physics (EJNMMI Phys)
                Springer International Publishing (Cham)
                ISSN: 2197-7364
                Published online: 14 September 2022
                Issue date: December 2022
                Volume: 9
                Article number: 62
                Affiliations
                [1] Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, People’s Republic of China
                [2] Henan Provincial People’s Hospital and the People’s Hospital of Zhengzhou, University of Zhengzhou, Zhengzhou, People’s Republic of China
                [3] Central Research Institute, United Imaging Healthcare Group Co., Ltd, Shanghai, People’s Republic of China
                [4] School of Biomedical Engineering, ShanghaiTech University, Shanghai, People’s Republic of China
                [5] United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, People’s Republic of China
                Author information
                ORCID: http://orcid.org/0000-0002-6734-6837
                Article
                DOI: 10.1186/s40658-022-00493-9
                PMCID: PMC9474756
                PMID: 36104468
                © The Author(s) 2022

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 12 May 2022
                Accepted: 1 September 2022
                Categories
                Original Research

                Keywords: motion correction, total-body PET, dynamic imaging, kinetic modeling
