
      Long short-term memory and learning-to-learn in networks of spiking neurons

Article type: journal-article


          Abstract

Recurrent networks of spiking neurons (RSNNs) underlie the astounding computing and learning capabilities of the brain. But the computing and learning capabilities of RSNN models have remained poor, at least in comparison with artificial neural networks (ANNs). We address two possible reasons for this. One is that RSNNs in the brain are not randomly connected or designed according to simple rules, and they do not start learning as tabula rasa networks. Rather, RSNNs in the brain were optimized for their tasks through evolution, development, and prior experience. Details of these optimization processes are largely unknown, but their functional contribution can be approximated through powerful optimization methods such as backpropagation through time (BPTT). A second major mismatch between RSNNs in the brain and their models is that the latter exhibit only a small fraction of the dynamics of neurons and synapses in the brain. We include neurons in our RSNN model that reproduce one prominent dynamical process of biological neurons which takes place at the behaviourally relevant time scale of seconds: neuronal adaptation. We denote these networks as LSNNs because of their long short-term memory. The inclusion of adapting neurons drastically increases the computing and learning capability of RSNNs if they are trained and configured by deep learning (BPTT combined with a rewiring algorithm that optimizes the network architecture). In fact, the computational performance of these RSNNs approaches for the first time that of LSTM networks. In addition, RSNNs with adapting neurons can acquire abstract knowledge from prior learning in a learning-to-learn (L2L) scheme, and transfer that knowledge in order to learn new but related tasks from very few examples. We demonstrate this for supervised learning and reinforcement learning.
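          The core mechanism the abstract refers to is neuronal adaptation: a neuron's firing threshold rises after each spike and relaxes back over seconds, giving the network a memory on behavioural time scales. Below is a minimal, hypothetical sketch of such an adaptive leaky integrate-and-fire (ALIF) neuron in discrete time. It follows the standard formulation used in the LSNN literature, but all parameter values and names here are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def simulate_alif(I, dt=1.0, tau_m=20.0, tau_a=2000.0,
                  v_th=0.6, beta=1.8):
    """Single adaptive leaky integrate-and-fire (ALIF) neuron.

    After every spike the effective threshold is raised (scaled by beta)
    and decays back with the slow time constant tau_a (here 2 s), so
    recent activity is remembered on a behavioural time scale.
    """
    alpha = np.exp(-dt / tau_m)   # fast membrane decay per time step
    rho = np.exp(-dt / tau_a)     # slow adaptation decay per time step
    v = 0.0                       # membrane potential
    a = 0.0                       # adaptation variable
    spikes = np.zeros(len(I))
    for t, i_t in enumerate(I):
        v = alpha * v + (1.0 - alpha) * i_t    # leaky integration of input
        threshold = v_th + beta * a            # adaptive firing threshold
        if v >= threshold:
            spikes[t] = 1.0
            v -= threshold                     # reset by subtraction
        a = rho * a + (1.0 - rho) * spikes[t]  # update adaptation trace
    return spikes

# With constant input, the firing rate decays as the threshold adapts.
out = simulate_alif(np.full(5000, 1.0))
print(out[:1000].sum(), out[-1000:].sum())  # more spikes early than late
```

          In the paper's LSNN setting, many such adapting neurons are mixed with standard LIF neurons in one recurrent network and trained end-to-end with BPTT; the sketch above only illustrates the single-neuron dynamics that supply the slow memory.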

Notes

The first three authors contributed equally. The paper was accepted at NIPS 2018.


          Author and article information

Journal: arXiv
Published: March 2018
DOI: 10.48550/ARXIV.1803.09574

          arXiv.org perpetual, non-exclusive license

History

26 March 2018; 28 March 2018; 19 May 2018; 22 May 2018; 02 November 2018; 05 November 2018; 25 December 2018; 27 December 2018

Subject categories

Neural and Evolutionary Computing (cs.NE); Neurons and Cognition (q-bio.NC); FOS: Biological sciences; FOS: Computer and information sciences
