
      Using Outliers in Freesurfer Segmentation Statistics to Identify Cortical Reconstruction Errors in Structural Scans

      Preprint
      bioRxiv


          Abstract

          Introduction: Quality assurance (QA) is vital for ensuring the integrity of processed neuroimaging data used in clinical neuroscience research. Manual QA (visual inspection) of processed brains for cortical surface reconstruction errors is resource-intensive, particularly with large datasets. Several semi-automated QA tools instead flag subjects for editing quantitatively, based on outlier values in brain regions. This project had two goals: (1) evaluate the adequacy of a statistical QA method relative to visual inspection, and (2) examine whether error identification and correction significantly affects estimation of cortical parameters and established brain-behavior relationships.

          Methods: T1 MPRAGE images (N = 530) of healthy adults were obtained from the NKI-Rockland Sample and reconstructed using Freesurfer 5.3. Visual inspection of T1 images was conducted for: (1) participants (n = 110) with outlier values (z scores beyond ±3 SD) for subcortical and cortical segmentation volumes (outlier group), and (2) a random sample of the remaining participants (n = 110) whose segmentation values did not meet the outlier criterion (non-outlier group).

          Results: The outlier group had 21% more participants with errors identified by visual inspection than the non-outlier group, a medium effect size (Φ = 0.22). Nevertheless, a considerable portion of images with cortical extension errors (41%) fell in the non-outlier group. Sex significantly predicted error rate: men were 2.8 times more likely than women to have errors. Although nine brain regions changed significantly in size from pre- to post-editing (effect sizes 0.26 to 0.59), editing did not substantially change the correlations between neurocognitive task performance and brain volumes (ps > 0.05).

          Conclusions: Statistically based QA, although less resource-intensive, is not accurate enough to supplant visual inspection. We discuss practical implications of our findings to guide resource-allocation decisions for image processing.
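          The outlier criterion described in the Methods — flagging any subject whose segmentation volume falls more than 3 SD from the sample mean — can be sketched as below. This is a minimal illustration, not the authors' actual pipeline: the region name, data layout, and function name are hypothetical, and real Freesurfer workflows would read volumes from aseg/aparc stats tables.

```python
import statistics

def flag_outlier_subjects(volumes, threshold=3.0):
    """Return indices of subjects whose volume in any region
    lies more than `threshold` SDs from that region's mean.

    volumes: dict mapping region name -> list of volumes,
             one value per subject, same subject order per region.
    """
    outliers = set()
    for region, vals in volumes.items():
        mean = statistics.fmean(vals)
        sd = statistics.stdev(vals)
        if sd == 0:
            continue  # constant region: no subject can be an outlier
        for i, v in enumerate(vals):
            if abs((v - mean) / sd) > threshold:
                outliers.add(i)
    return sorted(outliers)

# Hypothetical data: 20 typical hippocampal volumes plus one extreme value.
vols = {"Left-Hippocampus": [4100.0] * 20 + [9500.0]}
print(flag_outlier_subjects(vols))  # the extreme subject (index 20) is flagged
```

As the study's results illustrate, such a univariate cutoff is cheap to compute across hundreds of subjects, but subjects with genuine reconstruction errors whose volumes stay within ±3 SD (the 41% of cortical extension errors found in the non-outlier group) will pass it silently.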

          Author and article information

          Journal
          bioRxiv
          August 21, 2017
          Article
          10.1101/176818
          fe016b8c-770a-44dc-aebe-17ff89b946ef
          © 2017
          History

          Molecular medicine, Neurosciences
