
      A Deep Multi-Task Learning Framework for Brain Tumor Segmentation

Methods Article


          Abstract

Glioma is the most common primary central nervous system tumor, accounting for about half of all intracranial primary tumors. As a non-invasive examination method, MRI plays an extremely important guiding role in the clinical intervention of tumors. However, manually segmenting brain tumors from MRI is time-consuming and labor-intensive for clinicians, which delays follow-up diagnosis and treatment planning. With the development of deep learning, medical image segmentation is gradually being automated. However, brain tumors are easily confused with stroke lesions, and severe class imbalance makes brain tumor segmentation one of the most difficult tasks in MRI segmentation. To solve these problems, we propose a deep multi-task learning framework and integrate a multi-depth fusion module into it to accurately segment brain tumors. In this framework, we add a distance transform decoder to the V-Net backbone, which makes the segmentation contours generated by the mask decoder more accurate and reduces rough boundaries. To combine the two decoders' tasks, we take a weighted sum of their corresponding loss functions, so that the distance map prediction regularizes the mask prediction. At the same time, the multi-depth fusion module in the encoder enhances the network's ability to extract features. The accuracy of the model is evaluated online using the multimodal MRI records of the BraTS 2018, BraTS 2019, and BraTS 2020 datasets. The method obtains high-quality segmentation results, with an average Dice score as high as 78%. The experimental results show that this model has great potential for segmenting brain tumors automatically and accurately.
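The abstract gives no implementation details, so as a minimal sketch of the weighted multi-task loss it describes, the Python snippet below derives a distance-map regression target from the ground-truth mask with SciPy's Euclidean distance transform and adds an L2 distance-map term to a soft Dice mask term. All function names and the weight `lam` are hypothetical illustrations, not taken from the paper.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def distance_map_target(mask: np.ndarray) -> np.ndarray:
    """Euclidean distance map of a binary ground-truth mask, used as the
    regression target for the distance transform decoder (assumed setup)."""
    return distance_transform_edt(mask).astype(np.float32)

def soft_dice_loss(prob: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """1 minus the soft Dice overlap between predicted probabilities
    and the binary ground-truth mask."""
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def multi_task_loss(mask_logits, dist_pred, mask_gt, dist_gt, lam=0.5):
    """Weighted sum of the two decoder losses: the distance-map MSE term
    regularizes the mask term; lam is a hyperparameter (value illustrative)."""
    l_mask = soft_dice_loss(torch.sigmoid(mask_logits), mask_gt)
    l_dist = F.mse_loss(dist_pred, dist_gt)
    return l_mask + lam * l_dist
```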


Most cited references (of 44)


          U-Net: Convolutional Networks for Biomedical Image Segmentation


            SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation

We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network and a corresponding decoder network, followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] and DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance, with competitive inference time and the most efficient inference memory usage compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet.
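To make the pooling-indices mechanism concrete, here is an illustration in PyTorch (not code from the SegNet paper): MaxPool2d can return the argmax indices, and MaxUnpool2d reuses them for the non-learned, sparse upsampling described above, after which a trainable convolution densifies the result.

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)
densify = nn.Conv2d(64, 64, kernel_size=3, padding=1)  # trainable filters

x = torch.randn(1, 64, 32, 32)      # a batch of encoder feature maps
pooled, indices = pool(x)           # 1x64x16x16 plus locations of each max
sparse = unpool(pooled, indices)    # back to 1x64x32x32; zeros elsewhere
dense = densify(sparse)             # convolve sparse maps into dense ones
```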

              The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).

In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients (manually annotated by up to four raters) and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
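For reference, the Dice score quoted above (and behind the 78% figure in the main abstract) measures the overlap of two binary masks. A minimal NumPy version, illustrative rather than the benchmark's official evaluation code, might be:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap of two binary masks: 2*|A n B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```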

                Author and article information

Journal
Frontiers in Oncology (Front. Oncol.)
Publisher: Frontiers Media S.A.
ISSN: 2234-943X
Published: 04 June 2021
Volume 11, Article 690244
                Affiliations
1 College of Medical Technology, Zhejiang Chinese Medical University, Hangzhou, China
2 Cardiovascular Research Centre, Royal Brompton Hospital, London, United Kingdom
3 National Heart and Lung Institute, Imperial College London, London, United Kingdom
4 College of Life Science, Zhejiang Chinese Medical University, Hangzhou, China
                Author notes

                Edited by: Xujiong Ye, University of Lincoln, United Kingdom

                Reviewed by: Weiping Ding, Nantong University, China; Chengjin Yu, Zhejiang University, China

*Correspondence: Guang Yang, g.yang@imperial.ac.uk; Xiaobo Lai, dmia_lab@zcmu.edu.cn

                †These authors have contributed equally to this work and share first authorship

                This article was submitted to Cancer Imaging and Image-directed Interventions, a section of the journal Frontiers in Oncology

Article
DOI: 10.3389/fonc.2021.690244
PMCID: PMC8212784
PMID: 34150660
                Copyright © 2021 Huang, Yang, Zhang, Xu, Yang, Jiang and Lai

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 02 April 2021
Accepted: 17 May 2021
                Page count
                Figures: 8, Tables: 6, Equations: 9, References: 44, Pages: 16, Words: 7577
                Categories
                Oncology
                Methods

                Oncology & Radiotherapy
Keywords: automatic segmentation, brain tumor, deep multi-task learning framework, multi-depth fusion module, magnetic resonance imaging
