
      PULP-NN: accelerating quantized neural networks on parallel ultra-low-power RISC-V processors


          Abstract

          We present PULP-NN, an optimized computing library for a parallel ultra-low-power tightly coupled cluster of RISC-V processors. The key innovation in PULP-NN is a set of kernels for quantized neural network inference, targeting byte and sub-byte data types, down to INT-1, tuned for the recent trend toward aggressive quantization in deep neural network inference. The proposed library exploits both the digital signal processing extensions available in the PULP RISC-V processors and the cluster's parallelism, achieving up to 15.5 MACs/cycle on INT-8 and improving performance by up to 63× with respect to a sequential implementation on a single RISC-V core implementing the baseline RV32IMC ISA. Using PULP-NN, a CIFAR-10 network on an octa-core cluster runs in 30× and 19.6× fewer clock cycles than the current state-of-the-art ARM CMSIS-NN library running on STM32L4 and STM32H7 MCUs, respectively. The proposed library, when running on a GAP-8 processor, outperforms execution on energy-efficient MCUs such as the STM32L4 by 36.8× and on high-end MCUs such as the STM32H7 by 7.45×, when operating at the maximum frequency. The energy efficiency on GAP-8 is 14.1× higher than on the STM32L4 and 39.5× higher than on the STM32H7, at the maximum-efficiency operating point.
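          To illustrate the kind of kernel the abstract describes, below is a minimal, hypothetical C sketch of a quantized INT-8 dot product built around a 4-way sum-of-dot-product primitive, the style of DSP extension PULP-NN exploits. The function names `sdot4_i8` and `qdot_i8` are illustrative, not part of the actual PULP-NN API; on real PULP cores the inner operation maps to a single packed-SIMD instruction rather than a scalar loop.

```c
#include <stdint.h>

/* Scalar model of a 4-way INT-8 sum-of-dot-product (hypothetical name:
 * sdot4_i8). On a PULP RISC-V core with DSP extensions, this whole
 * function corresponds to one instruction on packed 32-bit registers. */
static int32_t sdot4_i8(const int8_t a[4], const int8_t b[4], int32_t acc)
{
    for (int i = 0; i < 4; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}

/* Inner loop of a quantized matrix-vector product: weights and
 * activations stay INT-8, accumulation widens to INT-32 to avoid
 * overflow. Assumes n is a multiple of 4 for brevity. */
int32_t qdot_i8(const int8_t *w, const int8_t *x, int n)
{
    int32_t acc = 0;
    for (int i = 0; i + 4 <= n; i += 4)
        acc = sdot4_i8(&w[i], &x[i], acc);
    return acc;
}
```

          Parallelizing such loops across the octa-core cluster, on top of the per-core SIMD throughput, is what yields the combined speedups reported in the abstract.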

          This article is part of the theme issue ‘Harmonizing energy-autonomous computing and intelligence’.


            Author and article information

            Contributors
            Journal
            Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
            Phil. Trans. R. Soc. A.
            The Royal Society
            1364-503X
            1471-2962
            February 07 2020
            December 23 2019
            February 07 2020
            : 378
            : 2164
            : 20190155
            Affiliations
            [1 ]Department of Electrical, Electronic and Information Engineering (DEI), University of Bologna Bologna, Italy
            [2 ]Integrated systems Laboratory (IIS), ETH Zurich Zurich, Switzerland
            Article
            10.1098/rsta.2019.0155
            6939244
            31865877
            ae53bcf5-48ec-4671-9eff-17a13c32a85d
            © 2020

            https://royalsociety.org/journals/ethics-policies/data-sharing-mining/
