
      Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

      Research article


          Abstract

          Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations, thereby allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations, whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs, in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
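          To make the rate-based conversion idea concrete, here is a minimal Python/NumPy sketch of how a ReLU layer can be approximated by integrate-and-fire neurons whose firing rates encode the analog activations. The function names and the specific data-based normalization scheme are illustrative assumptions, not the paper's code.

              import numpy as np

              def normalize_weights(w, max_act_prev, max_act_this):
                  # Data-based weight rescaling (an assumed scheme): scale weights
                  # by the ratio of the maximum activations observed on training
                  # data in the previous and current layer, so analog activations
                  # map onto the usable firing-rate range of the spiking layer.
                  return w * max_act_prev / max_act_this

              def simulate_if_layer(w, in_rates, v_thresh=1.0, t_steps=200):
                  # Integrate-and-fire layer with reset-by-subtraction. Over many
                  # time steps, each neuron's firing rate approximates the ReLU
                  # activation of the corresponding analog layer.
                  v_mem = np.zeros(w.shape[0])       # membrane potentials
                  out_counts = np.zeros(w.shape[0])  # spikes emitted per neuron
                  for _ in range(t_steps):
                      v_mem += w @ in_rates          # integrate the input current
                      spiking = v_mem >= v_thresh    # threshold crossings
                      out_counts += spiking
                      v_mem[spiking] -= v_thresh     # reset by subtraction
                  return out_counts / t_steps        # ~ max(0, w @ in_rates)

          With unit threshold and enough time steps, the returned rate approaches max(0, w @ in_rates), i.e. the ReLU response of the analog layer; shortening t_steps trades classification accuracy against the number of operations, which is exactly the trade-off the abstract describes.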


          Most cited references (40)


          Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

          Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
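            For reference, a minimal NumPy sketch of the training-time forward pass described above. It covers only the per-mini-batch normalization; the running statistics used at inference are omitted, and the names are illustrative.

                import numpy as np

                def batch_norm_forward(x, gamma, beta, eps=1e-5):
                    # Normalize each feature over the mini-batch, then scale and
                    # shift with the learned parameters gamma and beta.
                    # x: (batch, features); gamma, beta: (features,)
                    mu = x.mean(axis=0)                    # per-feature batch mean
                    var = x.var(axis=0)                    # per-feature batch variance
                    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance
                    return gamma * x_hat + beta            # restore scale and shift

            Making the normalization part of the model in this way is what allows the higher learning rates and looser initialization the abstract reports.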

            Training Deep Spiking Neural Networks Using Backpropagation

            Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to that of conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
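            A hedged NumPy sketch of the core idea: the forward pass emits hard threshold spikes, while the backward pass treats the membrane potential as the differentiable signal and the spike discontinuity as noise. The box-window pseudo-derivative used here is an assumption in the spirit of the method, not the authors' exact formulation.

                import numpy as np

                def if_step(v_mem, input_current, v_thresh=1.0):
                    # Forward pass for one time step: integrate the input current,
                    # emit a spike on threshold crossing, reset by subtraction.
                    v_mem = v_mem + input_current
                    spikes = (v_mem >= v_thresh).astype(float)
                    v_mem = v_mem - spikes * v_thresh
                    return v_mem, spikes

                def spike_pseudo_grad(v_mem, v_thresh=1.0, width=0.5):
                    # Backward pass: the hard threshold has zero derivative almost
                    # everywhere, so gradients flow through the membrane potential
                    # instead, here gated by a simple box window around threshold
                    # (a straight-through-style estimator; the width is assumed).
                    return (np.abs(v_mem - v_thresh) < width).astype(float)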

              Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks


                Author and article information

                Journal
                Frontiers in Neuroscience (Front. Neurosci.)
                Frontiers Media S.A.
                ISSN: 1662-4548 (print); 1662-453X (electronic)
                Published: 07 December 2017
                Volume: 11, Article: 682
                Affiliations
                1. Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
                2. Bosch Center for Artificial Intelligence, Renningen, Germany
                Author notes

                Edited by: Gert Cauwenberghs, University of California, San Diego, United States

                Reviewed by: Sadique Sheik, University of California, San Diego, United States; John V. Arthur, IBM, United States. Bruno Umbria Pedroni contributed to the review of John V. Arthur.

                *Correspondence: Bodo Rueckauer, rbodo@ini.uzh.ch

                This article was submitted to Neuromorphic Engineering, a section of the journal Frontiers in Neuroscience

                Article
                DOI: 10.3389/fnins.2017.00682
                PMCID: PMC5770641
                PMID: 29375284
                Copyright © 2017 Rueckauer, Lungu, Hu, Pfeiffer and Liu.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 25 July 2017; Accepted: 22 November 2017
                Page count
                Figures: 5, Tables: 1, Equations: 10, References: 47, Pages: 12, Words: 9952
                Categories
                Neuroscience
                Original Research

                Keywords: artificial neural network, spiking neural network, deep learning, object classification, deep networks, spiking network conversion
