
      Certified Neural Network Watermarks with Randomized Smoothing

      Preprint

          Abstract

          Watermarking is a commonly used strategy to protect creators' rights to digital images, videos, and audio. Recently, watermarking methods have been extended to deep learning models -- in principle, the watermark should be preserved when an adversary tries to copy the model. In practice, however, watermarks can often be removed by an intelligent adversary. Several papers have proposed watermarking methods that claim to be empirically resistant to different types of removal attacks, but these new techniques often fail in the face of new or better-tuned adversaries. In this paper, we propose a certifiable watermarking method. Using the randomized smoothing technique proposed in Chiang et al., we show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain ℓ2 threshold. In addition to being certifiable, our watermark is also empirically more robust than previous watermarking methods. Our experiments can be reproduced with the code at https://github.com/arpitbansal297/Certified_Watermarks.
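
          The certificate rests on smoothing over the model's parameter space: the watermark (trigger-set) accuracy is evaluated in expectation over Gaussian perturbations of the weights, and a high smoothed accuracy is what a randomized-smoothing argument converts into robustness against any parameter change of bounded ℓ2 norm. The sketch below illustrates only that core idea in Python/PyTorch; it is not the authors' released code, and the names (model, trigger_loader, sigma, n_samples) are hypothetical placeholders.

          # Minimal sketch (not the authors' released code): estimate the "smoothed"
          # watermark accuracy by averaging trigger-set accuracy over Gaussian
          # perturbations of the model parameters.
          import copy
          import torch

          def smoothed_watermark_accuracy(model, trigger_loader, sigma=1.0, n_samples=100):
              """Monte Carlo estimate of trigger-set accuracy under Gaussian parameter
              noise N(0, sigma^2); a high smoothed accuracy is the quantity a
              randomized-smoothing argument turns into an l2 robustness certificate."""
              accuracies = []
              for _ in range(n_samples):
                  noisy = copy.deepcopy(model)   # perturb a copy, keep the original intact
                  noisy.eval()
                  with torch.no_grad():
                      for p in noisy.parameters():
                          p.add_(sigma * torch.randn_like(p))   # Gaussian noise on weights
                      correct = total = 0
                      for x, y in trigger_loader:   # trigger set encoding the watermark
                          pred = noisy(x).argmax(dim=1)
                          correct += (pred == y).sum().item()
                          total += y.numel()
                  accuracies.append(correct / total)
              return sum(accuracies) / len(accuracies)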

          Author and article information

          Published: 16 July 2022 (arXiv preprint)
          arXiv ID: 2207.07972
          Record ID: 0cc010d8-d791-4176-935f-f4a61dff656d
          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

          Custom metadata: ICML 2022
          Categories: cs.LG, cs.CR

          Keywords: Security & Cryptology, Artificial intelligence
