      Adversarial Attacks Against Medical Deep Learning Systems

      Preprint

          Abstract

          The discovery of adversarial examples has raised concerns about the practical deployment of deep learning systems. In this paper, we argue that the field of medicine may be uniquely susceptible to adversarial attacks, both in terms of monetary incentives and technical vulnerability. To this end, we outline the healthcare economy and the incentives it creates for fraud, we extend adversarial attacks to three popular medical imaging tasks, and we provide concrete examples of how and why such attacks could be realistically carried out. For each of our representative medical deep learning classifiers, white box and black box attacks were both effective and human-imperceptible. We urge caution in employing deep learning systems in clinical settings, and encourage research into domain-specific defense strategies.
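
          The attacks described in the abstract follow the standard adversarial-example recipe: add a small, carefully chosen perturbation to an input image so that a classifier changes its prediction while a human notices no difference. As a purely illustrative sketch of one common white box technique (FGSM, not necessarily the method used in this preprint), the snippet below perturbs an image along the sign of the loss gradient; model, image, label, and epsilon are hypothetical placeholders for a trained PyTorch classifier, a batched input tensor with values in [0, 1], its true class index, and the perturbation budget.

              import torch
              import torch.nn.functional as F

              def fgsm_perturb(model, image, label, epsilon=0.01):
                  # Illustrative FGSM-style white box attack: nudge every pixel by
                  # +/- epsilon in the direction that increases the classifier's loss.
                  # For small epsilon the change is typically imperceptible to humans.
                  image = image.clone().detach().requires_grad_(True)
                  loss = F.cross_entropy(model(image), label)
                  loss.backward()
                  adversarial = image + epsilon * image.grad.sign()
                  # Keep pixel values in the valid [0, 1] range.
                  return adversarial.clamp(0.0, 1.0).detach()

          A black box attacker without gradient access could instead craft the perturbation against a surrogate model and rely on its transferability to the deployed classifier, which is the usual reason such attacks remain feasible even when the target model is hidden.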


                Author and article information

                 Posted: 14 April 2018
                 arXiv ID: 1804.05296
                 Record ID: 65fdb4a3-7412-4c05-8572-0f0f70744565
                 License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                 arXiv categories: cs.CR, cs.CY, cs.LG, stat.ML
                 Subjects: Applied computer science, Security & Cryptology, Machine learning, Artificial intelligence
