      Deep visual social distancing monitoring to combat COVID-19: A comprehensive survey

Review article


          Abstract

Since the start of the COVID-19 pandemic, social distancing (SD) has played an essential role in controlling and slowing the spread of the virus in smart cities. To help ensure respect of SD in public areas, visual SD monitoring (VSDM) offers promising opportunities by (i) measuring and analyzing the physical distance between pedestrians in real time, (ii) detecting SD violations in crowds, and (iii) tracking and reporting individuals who violate SD norms. To the best of the authors' knowledge, this paper proposes the first comprehensive survey of VSDM frameworks and identifies their challenges and future perspectives. Specifically, we review existing contributions by presenting the background of VSDM, describing evaluation metrics, and discussing SD datasets. VSDM techniques are then carefully reviewed after dividing them into two main categories: hand-crafted feature-based and deep-learning-based methods. Particular focus is placed on convolutional neural network (CNN)-based methodologies, as most frameworks use one-stage, two-stage, or multi-stage CNN models. A comparative study is also conducted to identify their pros and cons. Thereafter, a critical analysis is performed to highlight the issues and impediments that hold back the expansion of VSDM systems. Finally, future directions attracting significant research and development are derived.
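The distance-monitoring step (i) described in the abstract can be illustrated with a minimal sketch. Assuming pedestrian centroids have already been detected and projected into a calibrated ground plane (e.g. via a bird's-eye-view transform), flagging SD violations reduces to a pairwise Euclidean distance check against a threshold; the function name, coordinates, and 2-metre default below are illustrative, not taken from any surveyed framework:

```python
from itertools import combinations
from math import dist

def sd_violations(centroids, min_distance=2.0):
    """Return index pairs of pedestrians closer than min_distance.

    centroids: ground-plane (x, y) positions, in the same units as
    min_distance (e.g. metres after camera calibration).
    """
    return [(i, j)
            for (i, a), (j, b) in combinations(enumerate(centroids), 2)
            if dist(a, b) < min_distance]

# Three pedestrians; only the first two stand closer than 2 m apart.
people = [(0.0, 0.0), (1.0, 0.5), (5.0, 5.0)]
print(sd_violations(people))  # [(0, 1)]
```

In practice the hard part is the projection into metric ground-plane coordinates; once positions are metric, the violation test itself is this simple pairwise check.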

Most cited references (123)

• ImageNet Large Scale Visual Recognition Challenge
• The Pascal Visual Object Classes (VOC) Challenge
• Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

  State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features: using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
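A core ingredient behind training the RPN described above is labeling candidate boxes (anchors) by their intersection-over-union (IoU) overlap with ground-truth boxes. As a rough, self-contained sketch (the 0.7/0.3 thresholds follow the Faster R-CNN paper; the function names and box format are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def label_anchor(anchor, gt_boxes, pos_thr=0.7, neg_thr=0.3):
    """1 = positive (object), 0 = negative (background), -1 = ignored."""
    best = max((iou(anchor, g) for g in gt_boxes), default=0.0)
    if best >= pos_thr:
        return 1
    if best < neg_thr:
        return 0
    return -1  # intermediate overlap: excluded from the training loss

# Two 2x2 boxes offset by (1, 1) overlap in a 1x1 region: IoU = 1/7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

Positive and negative anchors feed the objectness classification loss, while only positives contribute to the box-regression loss; anchors labeled -1 are skipped.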

                Author and article information

Journal
Sustainable Cities and Society (Sustain Cities Soc)
The Author(s). Published by Elsevier Ltd.
ISSN: 2210-6707 (print); 2210-6715 (electronic)
Published: 21 July 2022
Article 104064

Affiliations
[a] Computer Science and Engineering Department, Qatar University, Qatar
[b] Electrical Engineering Department, Qatar University, Qatar
[c] International Artificial Intelligence Center, Mohammed VI Polytechnic University, Morocco

Author notes
[*] Corresponding author.

Article
PII: S2210-6707(22)00382-1
DOI: 10.1016/j.scs.2022.104064
PMC: 9301907
d2f352b2-2388-4fb7-b34d-58eddeab65be
© 2022 The Author(s)

                Since January 2020 Elsevier has created a COVID-19 resource centre with free information in English and Mandarin on the novel coronavirus COVID-19. The COVID-19 resource centre is hosted on Elsevier Connect, the company's public news and information website. Elsevier hereby grants permission to make all its COVID-19-related research that is available on the COVID-19 resource centre - including this research content - immediately available in PubMed Central and other publicly funded repositories, such as the WHO COVID database with rights for unrestricted research re-use and analyses in any form or by any means with acknowledgement of the original source. These permissions are granted for free by Elsevier for as long as the COVID-19 resource centre remains active.

                History
Received: 17 April 2022
Revised: 7 July 2022
Accepted: 12 July 2022
                Categories
                Engineering Advance

Keywords: visual social distancing monitoring, pedestrian detection, Euclidean distance, bird's-eye view, convolutional neural networks, transfer learning
