
      Self-attention CNN for retinal layer segmentation in OCT

      research-article
      Biomedical Optics Express
      Optica Publishing Group


          Abstract

          The structure of the retinal layers provides valuable diagnostic information for many ophthalmic diseases. Optical coherence tomography (OCT) produces cross-sectional images of the retina that reveal these layers. U-net based approaches are prominent among retinal layer segmentation methods; they capture local characteristics well but struggle to model the long-range dependencies needed for contextual information. Moreover, retinal layer morphology is more complex in diseased eyes, which makes the segmentation task more challenging. We propose a U-shaped network that combines an encoder-decoder architecture with self-attention mechanisms. Tailored to the characteristics of retinal OCT cross-sectional images, a self-attention module operating in the vertical direction is added at the bottom of the U-shaped network, and attention mechanisms are also added to the skip connections and up-sampling paths to enhance essential features. In this design, the transformer's self-attention mechanism provides a global receptive field, supplying the contextual information that convolutions miss, while the convolutional neural network efficiently extracts the local details that the transformer overlooks. Experimental results show that our method segments the retinal layers more accurately than other methods, achieving average Dice scores of 0.871 and 0.820 on two public retinal OCT image datasets. By incorporating the transformer's self-attention mechanism into a U-shaped network, the proposed method improves retinal layer segmentation of OCT images, which is helpful for ophthalmic disease diagnosis.
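          The abstract describes a self-attention module that operates in the vertical direction at the bottleneck of the U-shaped network, so that each position attends along its image column (the depth axis, where the retinal layers stack). The paper's exact module is not reproduced here; the following is only an illustrative single-head sketch, and the function and weight names (`vertical_self_attention`, `wq`, `wk`, `wv`) are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def vertical_self_attention(feat, wq, wk, wv):
    """Single-head self-attention along the vertical (row) axis.

    feat: (H, W, C) feature map, e.g. the encoder bottleneck of a U-shaped net.
    Each pixel attends to every pixel in its own column, so the receptive
    field spans the full stack of retinal layers at that lateral position.
    wq, wk, wv: (C, d) query/key/value projection matrices.
    """
    q = feat @ wq                        # (H, W, d)
    k = feat @ wk
    v = feat @ wv
    d = q.shape[-1]
    # Move W to the front so attention runs over H within each column.
    qc = q.transpose(1, 0, 2)            # (W, H, d)
    kc = k.transpose(1, 0, 2)
    vc = v.transpose(1, 0, 2)
    attn = softmax(qc @ kc.transpose(0, 2, 1) / np.sqrt(d))  # (W, H, H)
    out = attn @ vc                      # (W, H, d)
    return out.transpose(1, 0, 2)        # back to (H, W, d)
```

Restricting attention to columns keeps the cost at O(W·H²) instead of O((H·W)²) for full 2D self-attention, which matches the intuition that layer boundaries in an OCT B-scan are ordered along the vertical axis.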

          Related collections

          Most cited references (26)


          U-Net: Convolutional Networks for Biomedical Image Segmentation


            Fully convolutional networks for semantic segmentation


              Focal loss for dense object detection

              The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https://github.com/facebookresearch/Detectron.
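              The reshaped cross-entropy loss described above down-weights well-classified examples through a modulating factor (1 − p_t)^γ. A minimal NumPy sketch of the binary form from Lin et al. (the function name and signature here are illustrative, not from the paper):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive class; y: label in {0, 1}.
    p_t is the probability assigned to the true class, so the factor
    (1 - p_t)**gamma shrinks toward 0 for well-classified examples
    and leaves hard examples nearly untouched.
    """
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

With γ = 0 and α = 1 this reduces to plain cross-entropy; with the paper's defaults (γ = 2, α = 0.25), an easy negative with p = 0.01 contributes roughly 10,000× less loss than it would under cross-entropy, which is how the dense detector avoids being overwhelmed by easy background locations.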

                Author and article information

                Journal
                Biomedical Optics Express (Biomed Opt Express; BOE)
                Optica Publishing Group
                ISSN: 2156-7085
                Published online: 13 February 2024; issue date: 01 March 2024
                Volume 15, Issue 3, pp. 1605-1617
                Affiliations
                [1]Shanghai Institute of Technology , Shanghai 201418, China
                Author information
                ORCID: https://orcid.org/0000-0001-5991-1210
                Article
                Publisher ID: 510464
                DOI: 10.1364/BOE.510464
                PMCID: 10942697
                PMID: 38495698
                © 2024 Optica Publishing Group, under the terms of the Optica Open Access Publishing Agreement
                License: https://doi.org/10.1364/OA_License_v2#VOR-OA

                History
                Received: 30 October 2023
                Revised: 13 January 2024
                Accepted: 30 January 2024
                Funding
                Funded by: Science and Technology Commission of Shanghai Municipality 10.13039/501100003399
                Award ID: 19441905800
                Funded by: National Natural Science Foundation of China 10.13039/501100001809
                Award ID: 61675134
                Award ID: 62175156
                Award ID: 81827807
                Funded by: Shanghai Institute of Technology 10.13039/501100008875
                Award ID: XTCX2022-04
                Categories
                Article

                Vision sciences
