      Is Open Access

      Deep RNN-Oriented Paradigm Shift through BOCANet: Broken Obfuscated Circuit Attack

      Preprint


          Abstract

          This is the first work to augment hardware attacks on obfuscated circuits with a deep recurrent neural network (D-RNN). Logic-encryption obfuscation has been used to thwart counterfeiting, overproduction, and reverse engineering, but it remains vulnerable to attacks. Efficient schemes exist, e.g., the satisfiability-checking (SAT)-based attack, that can potentially compromise obfuscated hardware. Nevertheless, not only do state-of-the-art countermeasures against such attacks exist (including the recent delay+logic locking (DLL) scheme in DAC'17), but the sheer amount of time and resources required to mount the attack can also hinder its efficacy. In this paper, we propose a deep RNN-oriented approach, called BOCANet, to (i) compromise the obfuscated hardware at least an order of magnitude more efficiently (>20X faster with a relatively high success rate) than existing attacks; (ii) attack such locked hardware even when the attacker's resources are limited to an insignificant number of I/O pairs (<0.5%) for reconstructing the secret key; and (iii) successfully break a number of experimented benchmarks (ISCAS-85 c432, c1355, c1908, and c7552).
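          Since the core idea is recovering a locked circuit's key from a small set of observed I/O pairs, a toy sketch may help fix intuitions. It replaces the paper's D-RNN with a naive candidate-key filter over a hypothetical 4-bit XOR-locked circuit; the circuit, key, and sample inputs are illustrative assumptions, not from the paper:

```python
from itertools import product

def locked_circuit(x, key):
    # Toy logic-locked circuit: XOR key gates on the input wires
    # (a common logic-locking style), then the original function.
    unlocked = [xi ^ ki for xi, ki in zip(x, key)]
    return (unlocked[0] & unlocked[1]) ^ (unlocked[2] | unlocked[3])

SECRET_KEY = (1, 0, 1, 1)  # hypothetical key the attacker wants to recover

# Attacker observes only a handful of I/O pairs from the activated chip.
io_pairs = [(x, locked_circuit(x, SECRET_KEY))
            for x in [(0, 0, 0, 0), (1, 1, 0, 0), (1, 0, 1, 0),
                      (0, 1, 1, 1), (1, 1, 1, 1)]]

# Keep only candidate keys consistent with every observed pair; the true
# key is always among the survivors, and each pair prunes the space.
candidates = [k for k in product((0, 1), repeat=4)
              if all(locked_circuit(x, k) == y for x, y in io_pairs)]
```

The brute-force filter is exponential in key length; BOCANet's point is that a trained D-RNN can generalize from such sparse I/O observations instead of enumerating keys.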


          Most cited references (10)


          Speech Recognition with Deep Recurrent Neural Networks

          Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However, RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long-range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.
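            The stacked-representation idea this abstract describes — each LSTM layer re-representing the sequence produced by the layer below — can be sketched with a minimal NumPy forward pass. The layer sizes and random initialization below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # One LSTM time step: four gate pre-activations stacked in one matmul.
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c_new = f * c + i * g          # cell state carries long-range context
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def deep_lstm(xs, layers):
    # "Deep" RNN: each layer's hidden state feeds the layer above it,
    # giving multiple levels of representation over the sequence.
    states = [(np.zeros(n), np.zeros(n)) for (W, U, b, n) in layers]
    for x in xs:
        inp = x
        for li, (W, U, b, n) in enumerate(layers):
            h, c = lstm_step(inp, *states[li], W, U, b)
            states[li] = (h, c)
            inp = h                # output feeds the next layer up
    return states[-1][0]           # top-layer hidden state
```

A usage sketch: build two layers of hidden size `n` with weight matrices of shape `(4n, input_dim)` and `(4n, n)`, then call `deep_lstm` on a list of input vectors to get the top layer's final hidden state.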

            Security analysis of logic obfuscation


              Evaluating the security of logic encryption algorithms


                Author and article information

                Published: 08 March 2018
                arXiv: 1803.03332
                ID: 2e87dafb-a559-465d-945e-130f643941c7
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                Subject: cs.CR
