
      Automatic cardiothoracic ratio calculation based on lung fields abstracted from chest X-ray images without heart segmentation

      research-article


          Abstract

          Introduction

          The cardiothoracic ratio (CTR), measured on postero-anterior chest X-ray (P-A CXR) images, is one of the most commonly used cardiac measurements and an indicator for the initial evaluation of cardiac diseases. However, the heart is less readily observable on P-A CXR images than the lung fields. Radiologists therefore often determine the CTR's right and left heart border points manually, from the left and right lung fields adjacent to the heart on the P-A CXR image. Such manual CTR measurement requires experienced radiologists and is time-consuming and laborious.

          Methods

          Based on the above, this article proposes a novel, fully automatic CTR calculation method based on lung fields abstracted from P-A CXR images using convolutional neural networks (CNNs), overcoming the limitations of heart segmentation and avoiding heart-segmentation errors. First, lung field mask images are abstracted from the P-A CXR images using pre-trained CNNs. Second, a novel localization method for the heart's right and left border points is proposed, based on the two-dimensional projection morphology of the lung field mask images using graphics.
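          As a rough illustration of how a CTR could be read off a lung-field mask alone, the sketch below takes the thoracic diameter as the widest outer span of the two lung fields and the cardiac diameter as the widest gap between their inner borders. This is an assumed simplification for illustration, not the authors' exact projection-morphology procedure, and the function name is hypothetical.

```python
import numpy as np

def ctr_from_lung_mask(mask):
    """Rough CTR estimate from a binary lung-field mask (1 = lung).

    Illustrative only: thoracic diameter = widest outer span of the
    lung fields; cardiac diameter = widest zero-run between the two
    lung fields' inner borders, scanned row by row.
    """
    rows = np.where(mask.any(axis=1))[0]
    thoracic = 0
    cardiac = 0
    for r in rows:
        cols = np.where(mask[r] > 0)[0]
        if cols.size == 0:
            continue
        # outer span of lung pixels in this row
        thoracic = max(thoracic, cols[-1] - cols[0] + 1)
        # longest run of non-lung pixels between the outermost lung pixels
        inner = mask[r, cols[0]:cols[-1] + 1]
        gap = run = 0
        for v in inner:
            run = run + 1 if v == 0 else 0
            gap = max(gap, run)
        cardiac = max(cardiac, gap)
    return cardiac / thoracic if thoracic else 0.0
```

On a toy mask with two 3-column lung fields separated by a 4-column gap across a 10-column thorax, this returns 0.4.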

          Results

          The results show that the mean distance errors in the x-axis direction of the CTR's four key points on the test sets T1 (21 × 512 × 512 static P-A CXR images) and T2 (13 × 512 × 512 dynamic P-A CXR images), based on various pre-trained CNNs, are 4.1161 and 3.2116 pixels, respectively. In addition, the mean CTR errors on test sets T1 and T2 based on the four proposed models are 0.0208 and 0.0180, respectively.

          Discussion

          Our proposed model achieves CTR-calculation performance equivalent to the previous CardioNet model, eliminates the need for heart segmentation, and takes less time. Therefore, our proposed method is practical and feasible and may become an effective tool for the initial evaluation of cardiac diseases.

          Related collections

          Most cited references (32)


          SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation

          We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low-resolution encoder feature maps to full input-resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well-known DeepLab-LargeFOV [3] and DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and the most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet.
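The encoder-decoder coupling via pooling indices described above can be sketched in NumPy. This is an illustrative toy (single channel, 2 × 2 windows, no learned filters), not the SegNet implementation: the encoder records where each maximum came from, and the decoder places pooled values back at those positions, leaving the rest of the upsampled map zero (sparse) until the following convolution densifies it.

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    # Encoder side: k x k max pooling that also records the flat index
    # of each maximum in the input, as SegNet's encoder does.
    h, w = x.shape
    ph, pw = h // k, w // k
    pooled = np.zeros((ph, pw))
    idx = np.zeros((ph, pw), dtype=int)
    for i in range(ph):
        for j in range(pw):
            win = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            m = np.argmax(win)          # flat index within the window
            pooled[i, j] = win.flat[m]
            di, dj = divmod(m, k)
            idx[i, j] = (i * k + di) * w + (j * k + dj)
    return pooled, idx

def max_unpool(pooled, idx, out_shape):
    # Decoder side: non-linear upsampling with the stored indices.
    # Each pooled value returns to its original position; all other
    # cells stay zero, so the result is sparse by construction.
    out = np.zeros(out_shape)
    out.flat[idx.ravel()] = pooled.ravel()
    return out
```

This index-based unpooling is what removes the need to *learn* the upsampling, at the cost of producing sparse maps that the decoder's trainable filters must fill in.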

            Automatic lung segmentation in routine imaging is primarily a data diversity problem, not a methodology problem

            Background

            Automated segmentation of anatomical structures is a crucial step in image analysis. For lung segmentation in computed tomography, a variety of approaches exists, involving sophisticated pipelines trained and validated on different datasets. However, the clinical applicability of these approaches across diseases remains limited.

            Methods

            We compared four generic deep learning approaches trained on various datasets and two readily available lung segmentation algorithms. We performed evaluation on routine imaging data with more than six different disease patterns and three published data sets.

            Results

            Using different deep learning approaches, mean Dice similarity coefficients (DSCs) on test datasets varied by no more than 0.02. When trained on a diverse routine dataset (n = 36), a standard approach (U-net) yields a higher DSC (0.97 ± 0.05) compared to training on public datasets such as the Lung Tissue Research Consortium (0.94 ± 0.13, p = 0.024) or Anatomy 3 (0.92 ± 0.15, p = 0.001). Trained on routine data (n = 231) covering multiple diseases, U-net compared to reference methods yields a DSC of 0.98 ± 0.03 versus 0.94 ± 0.12 (p = 0.024).

            Conclusions

            The accuracy and reliability of lung segmentation algorithms on demanding cases primarily rely on the diversity of the training data, highlighting the importance of data diversity compared to model choice. Efforts in developing new datasets and providing trained models to the public are critical. By releasing the trained model under General Public License 3.0, we aim to foster research on lung diseases by providing a readily available tool for segmentation of pathological lungs.
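The Dice similarity coefficient used throughout these comparisons is DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B; a minimal sketch (function name assumed):

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks:
    # DSC = 2 * |A intersect B| / (|A| + |B|).
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Identical masks give 1.0; disjoint masks give 0.0, which is why small shifts in hard cases move the reported DSC values by only a few hundredths.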

              A review of medical image data augmentation techniques for deep learning applications


                Author and article information

                Contributors
                URI: https://loop.frontiersin.org/people/1446610/overview
                URI: https://loop.frontiersin.org/people/1989487/overview
                URI: https://loop.frontiersin.org/people/1896279/overview
                Journal
                Frontiers in Physiology (Front. Physiol.)
                Frontiers Media S.A.
                ISSN: 1664-042X
                08 August 2024
                Volume: 15
                Article: 1416912
                Affiliations
                [1] Department of Radiological Research and Development, Shenzhen Lanmage Medical Technology Co., Ltd., Shenzhen, Guangdong, China
                [2] Neusoft Medical System Co., Ltd., Shenyang, Liaoning, China
                [3] School of Electrical and Information Engineering, Northeast Petroleum University, Daqing, China
                [4] College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
                [5] School of Life and Health Management, Shenyang City University, Shenyang, China
                [6] Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
                [7] College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
                [8] School of Applied Technology, Shenzhen University, Shenzhen, China
                [9] Engineering Research Centre of Medical Imaging and Intelligent Analysis, Ministry of Education, Shenyang, China
                Author notes

                Edited by: Domenico L. Gatti, Wayne State University, United States

                Reviewed by: Sharon Ackerman, Wayne State University, United States

                Seeya Awadhut Munj, Wayne State University, United States

                † These authors have contributed equally to this work

                Article
                Article number: 1416912
                DOI: 10.3389/fphys.2024.1416912
                PMCID: 11338915
                PMID: 39175612
                Copyright © 2024 Yang, Zheng, Guo, Wu, Gao, Guo, Chen, Liu, Ouyang, Chen and Kang.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 13 April 2024
                Accepted: 22 July 2024
                Funding
                The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This research was funded by the Zhongnanshan Medical Foundation of Guangdong Province, China (ZNSXS-20230001); the National Natural Science Foundation of China (62071311); and the special program for key fields of colleges and universities in Guangdong Province (biomedicine and health) of China, grant number (2021ZDZX 2008).
                Categories
                Physiology
                Original Research
                Custom metadata
                Computational Physiology and Medicine

                Anatomy & Physiology
                cardiothoracic ratio, chest x-ray images, lung field segmentation, edge detection, convolutional neural network, graphics
