
Multi-organ segmentation of abdominal structures from non-contrast and contrast enhanced CT images

Research article


Abstract

Manually delineating upper abdominal organs at risk (OARs) is a time-consuming task. To develop a deep-learning-based tool for accurate and robust auto-segmentation of these OARs, we selected forty pancreatic cancer patients with contrast-enhanced, breath-hold computed tomographic (CT) images. We trained a three-dimensional (3D) U-Net ensemble that automatically segments all organ contours concurrently, using the self-configuring nnU-Net framework. The tool’s performance was quantitatively assessed on a held-out test set of 30 patients. Five radiation oncologists from three institutions then rated the tool’s contours on a 5-point Likert scale for an additional 75 randomly selected test patients. The mean (± std. dev.) Dice similarity coefficient values between the automatic segmentations and the ground truth on contrast-enhanced CT images were 0.80 ± 0.08, 0.89 ± 0.05, 0.90 ± 0.06, 0.92 ± 0.03, 0.96 ± 0.01, 0.97 ± 0.01, 0.96 ± 0.01, and 0.96 ± 0.01 for the duodenum, small bowel, large bowel, stomach, liver, spleen, right kidney, and left kidney, respectively. Of the duodenum contours, 89.3% (contrast-enhanced) and 85.3% (non-contrast-enhanced) were scored 3 or above, indicating that only minor edits were required; more than 90% of the contours for the other organs were scored 3 or above. The tool achieved a high level of clinical acceptability with a small training dataset and provides accurate contours for treatment planning.
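For context, the Dice similarity coefficient (DSC) used above measures the volumetric overlap between an automatic contour and the ground truth, ranging from 0 (no overlap) to 1 (perfect agreement). Below is a minimal NumPy sketch of a per-organ DSC evaluation; the organ label ids and the random placeholder volumes are illustrative assumptions, not data from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Illustrative per-organ evaluation on integer label maps; the label ids
# and placeholder volumes are assumptions, not values from the study.
organs = {1: "duodenum", 2: "small bowel", 3: "large bowel", 4: "stomach"}
rng = np.random.default_rng(42)
auto = rng.integers(0, 5, size=(64, 128, 128))    # automatic segmentation
manual = rng.integers(0, 5, size=(64, 128, 128))  # ground-truth contours
for label_id, name in organs.items():
    dsc = dice_coefficient(auto == label_id, manual == label_id)
    print(f"{name}: DSC = {dsc:.2f}")
```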


Most cited references (28)


nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation

Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.
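To make the ensembling step concrete: nnU-Net trains one model per cross-validation fold and, per its paper, combines them by averaging softmax probabilities before taking the per-voxel argmax. A minimal sketch of that combination step, assuming five folds and illustrative array shapes (not the framework’s actual API):

```python
import numpy as np

def ensemble_argmax(fold_softmax: list) -> np.ndarray:
    """Average per-fold class probabilities, then pick the best class per voxel.

    Each array in fold_softmax has shape (num_classes, D, H, W). Averaging
    softmax outputs across folds before the argmax follows the ensembling
    strategy described in the nnU-Net paper; the shapes and fold count here
    are illustrative assumptions.
    """
    mean_probs = np.mean(np.stack(fold_softmax, axis=0), axis=0)
    return np.argmax(mean_probs, axis=0)  # integer label map, shape (D, H, W)

# Usage with 5 folds and 9 classes (background + 8 organs), placeholder data:
rng = np.random.default_rng(0)
folds = [rng.random((9, 32, 64, 64)) for _ in range(5)]
labels = ensemble_argmax(folds)
print(labels.shape)  # (32, 64, 64)
```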

U-Net and Its Variants for Medical Image Segmentation: A Review of Theory and Applications


Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation

The medical imaging literature has witnessed remarkable progress in high-performing segmentation models based on convolutional neural networks. Despite the new performance highs, the recent advanced segmentation models still require large, representative, and high quality annotated datasets. However, rarely do we have a perfect training dataset, particularly in the field of medical imaging, where data and annotations are both expensive to acquire. Recently, a large body of research has studied the problem of medical image segmentation with imperfect datasets, tackling two major dataset limitations: scarce annotations where only limited annotated data is available for training, and weak annotations where the training data has only sparse annotations, noisy annotations, or image-level annotations. In this article, we provide a detailed review of the solutions above, summarizing both the technical novelties and empirical results. We further compare the benefits and requirements of the surveyed methodologies and provide our recommended solutions. We hope this survey article increases the community awareness of the techniques that are available to handle imperfect medical image segmentation datasets.
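As one concrete example of the scarce-annotation strategies such reviews survey, self-training has a model trained on the small labeled set assign pseudo-labels to unlabeled images, folding its most confident predictions back into the training pool. A schematic sketch, assuming a hypothetical model object with fit and predict_proba methods and an illustrative 0.95 confidence threshold:

```python
import numpy as np

def self_training_round(model, x_labeled, y_labeled, x_unlabeled,
                        min_confidence=0.95):
    """One pseudo-labeling round for segmentation with scarce annotations.

    `model` is assumed to expose fit(x, y) and predict_proba(x) returning
    (N, num_classes, ...) voxel probabilities; this interface and the 0.95
    threshold are illustrative assumptions, not taken from the review.
    """
    model.fit(x_labeled, y_labeled)
    probs = model.predict_proba(x_unlabeled)
    pseudo_labels = probs.argmax(axis=1)   # hard pseudo-labels per voxel
    confidence = probs.max(axis=1)         # per-voxel confidence
    # Keep only cases where the model is confident over most of the volume.
    keep = confidence.reshape(len(x_unlabeled), -1).mean(axis=1) >= min_confidence
    x_new = np.concatenate([x_labeled, x_unlabeled[keep]])
    y_new = np.concatenate([y_labeled, pseudo_labels[keep]])
    return x_new, y_new
```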

Author and article information

Contributors
cyu4@mdanderson.org
Journal
Scientific Reports (Sci Rep), Nature Publishing Group UK, London; ISSN 2045-2322
Published: 9 November 2022
Volume 12, Article number 19093
Affiliations
[1] The University of Texas MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences (GSBS), Houston, TX, USA
[2] Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
[3] Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
[4] Department of Radiation Physics, The University of Alabama at Birmingham, Birmingham, AL, USA
[5] Guy’s and St Thomas’ NHS Foundation Trust, London, UK
[6] Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
Article
DOI: 10.1038/s41598-022-21206-3
PMCID: PMC9646761
PMID: 36351987
© The Author(s) 2022

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

History
Received: 18 January 2022
Accepted: 23 September 2022
Categories
Article
Keywords
translational research, cancer, computational science
