
      Shape Constrained Fully Convolutional DenseNet with Adversarial Training for Multi-organ Segmentation on Head and Neck CT and Low Field MR Images


          Abstract

          Purpose:

          Image guided radiotherapy provides images not only for patient positioning but also for online adaptive radiotherapy. Accurate delineation of organs-at-risk (OARs) on Head and Neck (H&N) CT and MR images is valuable to both initial treatment planning and adaptive planning, but manual contouring is laborious and inconsistent. A novel method based on the generative adversarial network (GAN) with a shape constraint (SC-GAN) is developed for fully automated H&N OAR segmentation on CT and low field MRI.

          Methods and materials:

          A deep supervised fully convolutional DenseNet is employed as the segmentation network for voxel-wise prediction. A CNN-based discriminator network is then utilized to correct prediction errors and image-level inconsistency between the prediction and the ground truth. An additional shape representation loss between the prediction and the ground truth in the latent shape space is integrated into the segmentation and adversarial loss functions to reduce false positives and constrain the predicted shapes. The proposed segmentation method was first benchmarked on a public H&N CT database including 32 patients, and then on 25 0.35 T MR image sets obtained from an MR guided radiotherapy system. The OARs include the brainstem, optical chiasm, larynx (MR only), mandible, pharynx (MR only), parotid glands (both left and right), optical nerves (both left and right), and submandibular glands (both left and right, CT only). The performance of the proposed SC-GAN was compared with GAN alone and with GAN plus the shape constraint (SC) but without the DenseNet (SC-GAN-ResNet) to quantify the contributions of the shape constraint and the DenseNet to the deep neural network segmentation.
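
          As an illustration of how the three terms described above can be combined, the following PyTorch-style sketch assembles a voxel-wise segmentation loss, an adversarial term, and a latent shape representation loss into one training objective. The weights lam_adv and lam_shape and the function names are illustrative assumptions; the abstract does not specify the paper's exact loss formulation or weighting.

```python
import torch
import torch.nn.functional as F

def sc_gan_objective(pred_logits, target, disc_on_pred,
                     shape_code_pred, shape_code_gt,
                     lam_adv=0.01, lam_shape=0.1):
    """Hypothetical combined objective for a shape-constrained adversarial
    segmentation network. lam_adv and lam_shape are placeholder weights,
    not values reported in the paper.

    pred_logits:     (N, C, D, H, W) segmentation network output
    target:          (N, D, H, W) ground-truth label volume
    disc_on_pred:    discriminator logits for the predicted segmentation
    shape_code_*:    latent codes from a pretrained shape encoder
    """
    # Voxel-wise prediction loss of the fully convolutional DenseNet.
    l_seg = F.cross_entropy(pred_logits, target)
    # Adversarial term: push the discriminator to score the prediction as
    # real, penalizing image-level inconsistency with the ground truth.
    l_adv = F.binary_cross_entropy_with_logits(
        disc_on_pred, torch.ones_like(disc_on_pred))
    # Shape representation loss: distance between prediction and ground
    # truth in the latent shape space, constraining the predicted shapes.
    l_shape = F.mse_loss(shape_code_pred, shape_code_gt)
    return l_seg + lam_adv * l_adv + lam_shape * l_shape
```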

          Results:

          The proposed SC-GAN slightly but consistently improved the segmentation accuracy on the benchmark H&N CT images compared with our previous deep segmentation network, which had outperformed other published methods on the same or similar H&N CT datasets. On the low field MR dataset, the following average Dice’s indices were obtained using the proposed SC-GAN: 0.916 (brainstem), 0.589 (optical chiasm), 0.816 (mandible), 0.703 (optical nerves), 0.799 (larynx), 0.706 (pharynx), and 0.845 (parotid glands). The average surface distances ranged from 0.68 mm (brainstem) to 1.70 mm (larynx). The 95% surface distance ranged from 1.48 mm (left optical nerve) to 3.92 mm (larynx). Compared with CT, by the 95% surface distance measure, the automated segmentation accuracy was higher on MR for the brainstem, optical chiasm, optical nerves, and parotids, and lower for the mandible. SC-GAN performed better than SC-GAN-ResNet, which in turn was more accurate than GAN alone on both the CT and MR datasets. The segmentation time for one patient was 14 seconds on a single GPU.
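
          For reference, the three reported metrics can be computed from binary masks as in the following NumPy/SciPy sketch: Dice’s index, the average symmetric surface distance, and the 95th-percentile surface distance. This is a generic implementation under the assumption of symmetric surface distances with anisotropic voxel spacing; it is not the paper's evaluation code.

```python
import numpy as np
from scipy import ndimage

def dice_index(a, b):
    """Dice's index between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def _surface_distances(a, b, spacing):
    """Distances (mm) from surface voxels of mask a to the surface of b."""
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    # Euclidean distance map to b's surface, honoring voxel spacing.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def surface_metrics(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average surface distance and 95% surface distance, both in mm."""
    a, b = a.astype(bool), b.astype(bool)
    d = np.concatenate([_surface_distances(a, b, spacing),
                        _surface_distances(b, a, spacing)])
    return d.mean(), np.percentile(d, 95)
```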

          Conclusion:

          The performance of our previous shape constrained fully convolutional neural network for H&N segmentation is further improved by incorporating the GAN and DenseNet. With the novel segmentation method, we showed that low field MR images acquired on an MR guided radiotherapy system can support accurate and fully automated segmentation of both bony and soft tissue OARs for adaptive radiotherapy.

          Author and article information

          Journal
          Medical Physics (Med Phys)
          ISSN: 0094-2405 (print); 2473-4209 (electronic)
          June 2019; 46(6): 2669-2682
          Affiliations
          [1] Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi’an, Shaanxi, 710071, China
          [2] Department of Radiation Oncology, University of California—Los Angeles, Los Angeles, CA 90095, USA
          Author notes
          Corresponding author: Ke Sheng, Ph.D., Department of Radiation Oncology, University of California, Los Angeles, ksheng@mednet.ucla.edu
          Article
          PMC: PMC6581189 (NIHMS: nihpa1024820)
          DOI: 10.1002/mp.13553
          PMID: 31002188

          Keywords: Shape Representation Loss, Head and Neck images, Generative Adversarial Network, Fully Convolutional DenseNet
