      Measuring Human and Economic Activity From Satellite Imagery to Support City-Scale Decision-Making During COVID-19 Pandemic

      research-article


          Abstract

          The COVID-19 outbreak forced governments worldwide to impose lockdowns and quarantines to prevent virus transmission. As a consequence, human and economic activities have been disrupted all over the globe, and the recovery process is also expected to be rough. Economic activities impact social behaviors, which leave signatures in satellite images that can be automatically detected and classified. Satellite imagery can support the decision-making of analysts and policymakers by providing a different kind of visibility into unfolding economic changes. In this article, we use a deep learning approach that combines strategic location sampling with an ensemble of lightweight convolutional neural networks (CNNs) to recognize specific elements in satellite images, from which economic indicators can be computed automatically. This CNN ensemble framework ranked third in the US Department of Defense xView challenge, the most advanced benchmark for object detection in satellite images. We show the potential of our framework for temporal analysis using the US IARPA Functional Map of the World (fMoW) dataset. We also show results on real examples of different sites before and after the COVID-19 outbreak to illustrate measurable indicators. Our code and annotated high-resolution aerial scenes before and after the outbreak are available on GitHub.
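The pipeline the abstract describes, splitting large scenes into tiles at sampled locations and combining the predictions of several lightweight CNNs, can be sketched as follows. The tile size, stride, soft-voting combination, and threshold-based counting below are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def tile_scene(scene, tile=512, stride=256):
    """Split a large satellite scene (H, W, C) into overlapping tiles.

    Returns ((y, x), tile_array) pairs so that per-tile detections can
    be mapped back to scene coordinates.
    """
    h, w = scene.shape[:2]
    tiles = []
    for y in range(0, max(h - tile, 0) + 1, stride):
        for x in range(0, max(w - tile, 0) + 1, stride):
            tiles.append(((y, x), scene[y:y + tile, x:x + tile]))
    return tiles

def ensemble_predict(models, tile_img):
    """Soft-voting ensemble: average the scores of all member models."""
    scores = np.stack([m(tile_img) for m in models])
    return scores.mean(axis=0)

def count_objects(models, scene, threshold=0.5, **tiling_kwargs):
    """Count tiles whose ensemble score passes a threshold.

    A per-tile presence count is a crude but usable activity indicator
    (e.g. cars on a parking lot before vs. after a lockdown).
    """
    hits = 0
    for _, tile_img in tile_scene(scene, **tiling_kwargs):
        if ensemble_predict(models, tile_img) >= threshold:
            hits += 1
    return hits
```

Comparing such counts for the same site at two acquisition dates gives a simple temporal indicator of activity change.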

          Most cited references (45)


          Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position

          A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by "learning without a teacher" and acquires the ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes, unaffected by their positions. This network is given the nickname "neocognitron". After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel. The network consists of an input layer (photoreceptor array) followed by a cascade connection of a number of modular structures, each of which is composed of two layers of cells connected in cascade. The first layer of each module consists of "S-cells", which show characteristics similar to simple cells or lower-order hypercomplex cells, and the second layer consists of "C-cells", similar to complex cells or higher-order hypercomplex cells. The afferent synapses to each S-cell have plasticity and are modifiable. The network is capable of unsupervised learning: no "teacher" is needed during self-organization; it is only necessary to present a set of stimulus patterns repeatedly to the input layer of the network. The network has been simulated on a digital computer. After repetitive presentation of a set of stimulus patterns, each stimulus pattern comes to elicit an output from only one of the C-cells of the last layer, and conversely, that C-cell becomes selectively responsive only to that stimulus pattern. That is, none of the C-cells of the last layer responds to more than one stimulus pattern. The response of the C-cells of the last layer is not affected by the pattern's position at all, nor by small changes in the shape or size of the stimulus pattern.
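The S-cell / C-cell cascade described above can be sketched in a few lines of NumPy. A single module is shown; the correlation-plus-ReLU S-layer and the max-pooling C-layer below are illustrative simplifications of Fukushima's full model, not a faithful reimplementation.

```python
import numpy as np

def s_layer(img, kernel):
    """S-cells: feature extraction by 2-D correlation with a template,
    followed by a non-negative (ReLU-like) activation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + kh, x:x + kw] * kernel).sum()
    return np.maximum(out, 0.0)

def c_layer(fmap, pool=2):
    """C-cells: local max pooling, which makes the module's response
    tolerant to small shifts of the stimulus pattern."""
    h, w = fmap.shape
    h2, w2 = h // pool, w // pool
    f = fmap[:h2 * pool, :w2 * pool]
    return f.reshape(h2, pool, w2, pool).max(axis=(1, 3))
```

Cascading several such modules shrinks the feature maps while widening the region over which a shift leaves the final response unchanged, which is the shift tolerance the abstract describes.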

            Deep learning


              Focal Loss for Dense Object Detection

              The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https://github.com/facebookresearch/Detectron.
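The loss reshaping described above follows directly from its definition, FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t). The NumPy sketch below uses the paper's default gamma = 2 and alpha = 0.25; the function names and vectorized form are this sketch's own choices.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-12):
    """Binary focal loss.

    p: predicted probability of the positive class; y: 0/1 labels.
    The modulating factor (1 - p_t) ** gamma down-weights well-classified
    examples, so training focuses on the hard ones and the many easy
    negatives cannot overwhelm the detector.
    """
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)          # prob. of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

With gamma = 0 this reduces to alpha-weighted cross-entropy; increasing gamma shrinks the contribution of easy examples ever more aggressively.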

                Author and article information

                Journal
                IEEE Transactions on Big Data (IEEE Trans Big Data), IEEE
                ISSN: 2332-7790
                Published: 01 March 2021 (online 21 October 2020)
                Volume: 7, Issue: 1, Pages: 56-68
                Affiliations
                [1] Universidade Tecnológica Federal do Paraná (UTFPR), Curitiba 80230-901, Brazil
                [2] Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
                Article
                DOI: 10.1109/TBDATA.2020.3032839
                PMC: 8769025
                PMID: 37981992
                This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/

                History
                Received: 14 July 2020
                Revised: 12 October 2020
                Accepted: 19 October 2020
                Published: 01 March 2021
                Page count
                Figures: 10, Tables: 2, References: 46, Pages: 13
                Funding
                Funded by: National Science Foundation (fundref 10.13039/100000001)
                Award ID: CNS-1513126
                Funded by: University of South Florida, Institute for Artificial Intelligence
                Part of the equipment used in this project is supported by Grant CNS-1513126 from the US National Science Foundation. Funding from the University of South Florida for the Institute for Artificial Intelligence (AI+X) is also acknowledged.
                Categories
                Article

                Keywords: remote sensing, CNN-based object detection, human and economic activity assessment, COVID-19 pandemic
