
      Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates

      Research Article


          Abstract

          Deep neural feedforward networks are effective models for a wide array of problems, but training and deploying such networks presents a significant energy cost. Spiking neural networks (SNNs), which are modeled after biologically realistic neurons, offer a potential solution when deployed correctly on neuromorphic computing hardware. Still, many applications train SNNs offline, and running network training directly on neuromorphic hardware is an ongoing research problem. The primary hurdle is that back-propagation, which makes training such artificial deep networks possible, is biologically implausible. Neuroscientists are uncertain about how the brain would propagate a precise error signal backward through a network of neurons. Recent progress addresses part of this question, e.g., the weight transport problem, but a complete solution remains elusive. In contrast, novel learning rules based on the information bottleneck (IB) train each layer of a network independently, circumventing the need to propagate errors across layers; instead, propagation is implicit due to the layers' feedforward connectivity. These rules take the form of a three-factor Hebbian update: a global error signal modulates local synaptic updates within each layer. Unfortunately, the global signal for a given layer requires processing multiple samples concurrently, whereas the brain only sees a single sample at a time. We propose a new three-factor update rule in which the global signal correctly captures information across samples via an auxiliary memory network. The auxiliary network can be trained a priori, independently of the dataset used with the primary network. We demonstrate comparable performance to baselines on image classification tasks. Interestingly, unlike back-propagation-like schemes, where there is no link between learning and memory, our rule presents a direct connection between working memory and synaptic updates. To the best of our knowledge, this is the first rule to make this link explicit. We explore these implications in initial experiments examining the effect of memory capacity on learning performance. Moving forward, this work suggests an alternate view of learning in which each layer balances memory-informed compression against task performance. This view naturally encompasses several key aspects of neural computation, including memory, efficiency, and locality.
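          The abstract describes the rule only at a high level. As a minimal sketch of the generic three-factor Hebbian form it references, assuming a simple rate-based layer and a placeholder scalar standing in for the memory-derived global signal (the names, sizes, and modulator value below are illustrative, not taken from the paper):

              import numpy as np

              # Hypothetical sketch of a three-factor Hebbian update for one layer.
              # Names (pre, post, m, lr) and all sizes are illustrative choices.
              rng = np.random.default_rng(0)

              n_pre, n_post = 8, 4
              W = rng.normal(scale=0.1, size=(n_post, n_pre))  # synaptic weights

              def layer_forward(W, x):
                  """Simple rate-based layer: post-synaptic activity from input x."""
                  return np.tanh(W @ x)

              def three_factor_update(W, pre, post, m, lr=1e-2):
                  """Three-factor rule: a scalar modulator m gates the local Hebbian term.

                  dW_ij = lr * m * post_i * pre_j
                  The local factors (pre, post) are available at each synapse; the third
                  factor m stands in for the layer-wide signal that, in the paper, is
                  produced by an auxiliary memory network capturing information across
                  samples. Here it is just a placeholder constant.
                  """
                  return W + lr * m * np.outer(post, pre)

              x = rng.normal(size=n_pre)          # one input sample
              y = layer_forward(W, x)
              m = 0.5                             # placeholder global modulatory signal
              W = three_factor_update(W, x, y, m)

          The point of the sketch is locality: each weight update uses only activity at its own synapse plus one shared scalar per layer, so no error is propagated across layers.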


          Most cited references (33)


          Loihi: A Neuromorphic Manycore Processor with On-Chip Learning


            Generating coherent patterns of activity from chaotic neural networks.

            Neural circuits display complex activity patterns both spontaneously and when responding to a stimulus or generating a motor output. How are these two forms of activity related? We develop a procedure called FORCE learning for modifying synaptic strengths either external to or within a model neural network to change chaotic spontaneous activity into a wide variety of desired activity patterns. FORCE learning works even though the networks we train are spontaneously chaotic and we leave feedback loops intact and unclamped during learning. Using this approach, we construct networks that produce a wide variety of complex output patterns, input-output transformations that require memory, multiple outputs that can be switched by control inputs, and motor patterns matching human motion capture data. Our results reproduce data on premovement activity in motor and premotor cortex, and suggest that synaptic plasticity may be a more rapid and powerful modulator of network activity than generally appreciated.
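            For context, the core of FORCE learning is a recursive least squares (RLS) update applied to readout weights while the network runs with its feedback loop intact. A minimal sketch of that idea, assuming a rate network with a single linear readout; the network size, gain, and target pattern are illustrative and do not come from the cited paper:

                import numpy as np

                # Hypothetical minimal FORCE sketch: train a linear readout of a
                # chaotic rate network with recursive least squares (RLS), leaving
                # the feedback loop unclamped during learning.
                rng = np.random.default_rng(1)
                N, T, dt = 300, 2000, 0.1
                g = 1.5                                        # gain > 1: chaotic regime
                J = g * rng.normal(size=(N, N)) / np.sqrt(N)   # recurrent weights (fixed)
                w_fb = rng.uniform(-1, 1, size=N)              # feedback weights (fixed)
                w = np.zeros(N)                                # readout weights (learned)
                P = np.eye(N)                                  # RLS inverse-correlation estimate

                x = 0.5 * rng.normal(size=N)                   # network state
                f = np.sin(2 * np.pi * np.arange(T) * dt / 10) # desired output pattern

                for t in range(T):
                    r = np.tanh(x)
                    z = w @ r                                  # readout
                    x += dt * (-x + J @ r + w_fb * z)          # leaky dynamics with feedback
                    # RLS update of the readout while the loop runs (the FORCE idea)
                    Pr = P @ r
                    k = Pr / (1.0 + r @ Pr)
                    P -= np.outer(k, Pr)
                    w -= (z - f[t]) * k                        # shrink the readout error fast

            Because the error is driven down quickly at every step, the fed-back output stays close to the target throughout training, which is what lets the loop remain unclamped.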

              Backpropagation and the brain


                Author and article information

                Journal
                Frontiers in Computational Neuroscience (Front. Comput. Neurosci.)
                Publisher: Frontiers Media S.A.
                ISSN: 1662-5188
                Published: 16 May 2024
                Volume: 18
                Article number: 1240348
                Affiliations
                [1] 1Cold Spring Harbor Laboratory , Long Island, NY, United States
                [2] 2Electrical and Computer Engineering Department, University of Wisconsin-Madison , Madison, WI, United States
                Author notes

                Edited by: Lei Deng, Tsinghua University, China

                Reviewed by: Shuangming Yang, Tianjin University, China

                Desmond Loke, Singapore University of Technology and Design, Singapore

                Zhaodong Chen, Nvidia, United States

                *Correspondence: Kyle Daruwalla daruwal@cshl.edu
                Article
                DOI: 10.3389/fncom.2024.1240348
                PMCID: PMC11137249
                PMID: 38818385
                Copyright © 2024 Daruwalla and Lipasti.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 14 June 2023
                Accepted: 26 April 2024
                Page count
                Figures: 6, Tables: 4, Equations: 14, References: 35, Pages: 11, Words: 6986
                Funding
                Funded by: Air Force Research Laboratory (DOI: 10.13039/100006602), Award ID: FA9550-18-1-0166
                Funded by: National Science Foundation (DOI: 10.13039/100000001), Award ID: CCF-1813434
                This work was funded by the US Air Force Research Laboratory and the National Science Foundation.
                Categories
                Neuroscience
                Original Research

                Neurosciences
                neuromorphic computing, neural network, learning rule, information bottleneck, back-propagation
