      STCD-EffV2T Unet: Semi Transfer Learning EfficientNetV2 T-Unet Network for Urban/Land Cover Change Detection Using Sentinel-2 Satellite Images

Remote Sensing, MDPI AG


          Abstract

Change detection in urban areas can support urban resource management and smart-city planning. The impact of human activities on the environment and land surface has intensified over the past decades, making the analysis of remote sensing data sources (such as satellite images) a practical option for rapid change detection in urban areas and the environment. We propose a semi-transfer-learning method, EfficientNetV2 T-Unet (EffV2 T-Unet), that combines the compound-scaled EfficientNetV2-T as the encoder path for feature extraction with the convolutional layers of Unet as the decoder path for reconstructing the binary change map. In the encoder path, we use EfficientNetV2-T pretrained on the ImageNet dataset. We employ two datasets to evaluate the performance of the proposed method for binary change detection. The first consists of Sentinel-2 satellite images captured in 2017 and 2021 over urban areas of northern Iran; the second is the Onera Satellite Change Detection (OSCD) dataset. The performance of the proposed method is compared with the YoloX-Unet and ResNest-Unet families and other well-known methods. The results demonstrate the effectiveness of the proposed method compared to the other methods, with the final change map reaching an overall accuracy of 97.66%.
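To make the encoder/decoder pairing concrete, the following is a minimal PyTorch sketch of the idea described in the abstract, not the authors' implementation: torchvision's efficientnet_v2_s stands in for the T variant (torchvision does not ship a T model), the two acquisition dates are stacked channel-wise behind a 1x1 projection, and the U-Net skip connections between encoder stages and decoder blocks are omitted for brevity. All class and variable names are hypothetical.

import torch
import torch.nn as nn
from torchvision.models import efficientnet_v2_s, EfficientNet_V2_S_Weights


class EffV2UnetSketch(nn.Module):
    """Bi-temporal change detector: pretrained EfficientNetV2 encoder + light decoder."""

    def __init__(self, in_channels: int = 6, base: int = 64):
        super().__init__()
        # A 1x1 projection lets the ImageNet-pretrained stem ingest the stacked
        # before/after pair without altering the backbone weights (transfer learning).
        self.project = nn.Conv2d(in_channels, 3, kernel_size=1)
        weights = EfficientNet_V2_S_Weights.IMAGENET1K_V1
        self.encoder = efficientnet_v2_s(weights=weights).features  # 1280 channels, stride 32

        def up(in_ch, out_ch):
            # Upsample by 2, then refine with a 3x3 convolution.
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(in_ch, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

        # Five x2 upsampling blocks bring the stride-32 features back to full resolution.
        self.decoder = nn.Sequential(
            up(1280, base * 8), up(base * 8, base * 4), up(base * 4, base * 2),
            up(base * 2, base), up(base, base),
        )
        self.head = nn.Conv2d(base, 1, kernel_size=1)  # one change logit per pixel

    def forward(self, before: torch.Tensor, after: torch.Tensor) -> torch.Tensor:
        x = torch.cat([before, after], dim=1)   # stack the two acquisition dates
        feats = self.encoder(self.project(x))   # deep semantic features
        return self.head(self.decoder(feats))   # per-pixel change logits


model = EffV2UnetSketch()
t1 = torch.randn(1, 3, 256, 256)  # e.g. an RGB composite of the 2017 scene
t2 = torch.randn(1, 3, 256, 256)  # e.g. an RGB composite of the 2021 scene
change_map = torch.sigmoid(model(t1, t2)) > 0.5   # binary change map, 1 x 1 x 256 x 256

One natural training regime under these assumptions is to freeze (or only partially fine-tune) the pretrained encoder and train the decoder from scratch on the change-detection data, which is what "semi-transfer learning" suggests here.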


Most cited references (55)


          DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
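To illustrate the atrous spatial pyramid pooling described above, here is a minimal PyTorch sketch (not the DeepLab authors' code): parallel dilated 3x3 convolutions at several rates over the same feature map, fused by a 1x1 convolution. The channel widths, dilation rates, and batch-norm placement shown are assumptions for illustration only.

import torch
import torch.nn as nn


class ASPPSketch(nn.Module):
    """Parallel atrous (dilated) convolutions at several rates, fused by a 1x1 conv."""

    def __init__(self, in_ch: int, out_ch: int = 256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList()
        for r in rates:
            # Rate 1 degenerates to a plain 1x1 convolution; larger rates enlarge the
            # effective field of view without adding parameters or computation
            # relative to a dense 3x3 kernel.
            k, pad, dil = (1, 0, 1) if r == 1 else (3, r, r)
            self.branches.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, k, padding=pad, dilation=dil, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ))
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Every branch sees the same features at a different sampling rate, so the
        # concatenation captures objects and context at multiple scales.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


feats = torch.randn(1, 512, 32, 32)   # a backbone feature map
out = ASPPSketch(512)(feats)          # shape: 1 x 256 x 32 x 32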

            Focal Loss for Dense Object Detection


              Object based image analysis for remote sensing


                Author and article information

Journal: Remote Sensing (MDPI AG), ISSN 2072-4292
Published: February 23 2023 (March 2023 issue)
Volume: 15, Issue: 5, Article number: 1232
DOI: 10.3390/rs15051232
© 2023. License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
