      Countering Malicious DeepFakes: Survey, Battleground, and Horizon

Research article

          Abstract

          The creation or manipulation of facial appearance with deep generative approaches, known as DeepFake, has achieved significant progress and enabled a wide range of benign and malicious applications, e.g., visual-effects assistance in movies and misinformation generation by faking famous persons. The malicious side of this technology has spawned another popular line of study, DeepFake detection, which aims to distinguish fake faces from real ones. With the rapid development of DeepFake-related research in the community, the two sides (DeepFake generation and detection) have formed a battleground, pushing each other's improvements and inspiring new directions, e.g., the evasion of DeepFake detection. Nevertheless, an overview of this battleground and of the new directions is missing from recent surveys owing to the rapid increase in related publications, limiting in-depth understanding of the trends and future work. To fill this gap, this paper provides a comprehensive overview and detailed analysis of research on DeepFake generation, DeepFake detection, and the evasion of DeepFake detection, with more than 318 research papers carefully surveyed. We present a taxonomy of DeepFake generation methods and a categorization of DeepFake detection methods, and, more importantly, we showcase the battleground between the two parties with detailed interactions between the adversaries (DeepFake generation) and the defenders (DeepFake detection). The battleground offers a fresh perspective on the latest landscape of DeepFake research and provides valuable analysis of the research challenges and opportunities as well as research trends and future directions. We also design interactive diagrams ( http://www.xujuefei.com/dfsurvey) that allow researchers to explore popular DeepFake generators or detectors of interest.
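
          The abstract frames DeepFake detection as the task of telling fake faces from real ones, i.e., binary classification over face images. As a purely illustrative sketch (not a method from the surveyed literature), the following PyTorch snippet shows that framing: a toy CNN maps a face crop to a single real/fake logit trained with a binary cross-entropy objective. The model name, architecture, and input size are all hypothetical.

```python
# Minimal, illustrative sketch of DeepFake detection as binary classification.
# This is NOT a method from the survey; architecture and names are hypothetical.
import torch
import torch.nn as nn


class TinyDeepFakeDetector(nn.Module):
    """Toy CNN that scores a face crop as real (0) or fake (1)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 128 -> 64
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),                                # global average pooling
        )
        self.classifier = nn.Linear(32, 1)  # single logit; sigmoid gives P(fake)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)


if __name__ == "__main__":
    model = TinyDeepFakeDetector()
    faces = torch.randn(4, 3, 128, 128)               # batch of hypothetical face crops
    labels = torch.tensor([[0.0], [1.0], [0.0], [1.0]])  # 0 = real, 1 = fake
    logits = model(faces)
    loss = nn.BCEWithLogitsLoss()(logits, labels)      # standard binary objective
    print(logits.sigmoid().squeeze(), loss.item())
```

          The detectors surveyed in the paper use far stronger backbones and richer cues than this toy example, but the input/output contract is the same binary real-versus-fake decision.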


                Author and article information

                Contributors
                wangrun@whu.edu.cn
                tsingqguo@ieee.org
                Journal
                International Journal of Computer Vision (Int J Comput Vis)
                Springer US (New York)
                ISSN: 0920-5691 (print); 1573-1405 (electronic)
                Published: 4 May 2022
                Pages: 1-57
                Affiliations
                [1] Alibaba Group, Sunnyvale, CA, USA (GRID grid.481557.a)
                [2] Key Laboratory of Aerospace Information Security and Trust Computing, School of Cyber Science and Engineering, Wuhan University, Wuhan, China (GRID grid.49470.3e, ISNI 0000 0001 2331 6153)
                [3] East China Normal University, Shanghai, China (GRID grid.22069.3f, ISNI 0000 0004 0369 6365)
                [4] College of Intelligence and Computing, Tianjin University, Tianjin, China (GRID grid.33763.32, ISNI 0000 0004 1761 2484)
                [5] Nanyang Technological University, Singapore (GRID grid.59025.3b, ISNI 0000 0001 2224 0361)
                [6] Alberta Machine Intelligence Institute (AMII), University of Alberta, Edmonton, AB, Canada (GRID grid.17089.37, ISNI 0000 0001 2190 316X)
                [7] Zhejiang Sci-Tech University, Hangzhou, China (GRID grid.413273.0, ISNI 0000 0001 0574 8737)
                Author notes

                Communicated by Jian Sun.

                Author information
                http://orcid.org/0000-0002-0857-8611
                http://orcid.org/0000-0002-2842-5137
                http://orcid.org/0000-0002-5784-770X
                http://orcid.org/0000-0003-0974-9299
                http://orcid.org/0000-0002-8621-2420
                http://orcid.org/0000-0001-7300-9215
                Article
                Article number: 1606
                DOI: 10.1007/s11263-022-01606-8
                PMCID: PMC9066404
                © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022

                This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.

                History
                Received: 27 February 2021
                Accepted: 11 March 2022
                Funding
                Funded by: National Research Foundation Singapore (FundRef: http://dx.doi.org/10.13039/501100001381)
                Award IDs: NRF2018NCR-NCR005-0001, NRF2018NCR-NSOE003-0001
                Categories
                Article

                Keywords: deepfake generation, deepfake detection, face, misinformation, disinformation, deepfakes
