
      Building transformers from neurons and astrocytes


          Significance

          Transformers have become the default choice of neural architecture for many machine learning applications. Their success across multiple domains such as language, vision, and speech raises the question: How can one build Transformers using biological computational units? At the same time, in the glial community, there is gradually accumulating evidence that astrocytes, formerly believed to be passive housekeeping cells in the brain, in fact play an important role in the brain’s information processing and computation. In this work we hypothesize that neuron–astrocyte networks can naturally implement the core computation performed by the Transformer block in AI. The omnipresence of astrocytes in almost every brain area may explain the success of Transformers across a diverse set of information domains and computational tasks.

          Abstract

          Glial cells account for between 50% and 90% of all human brain cells, and serve a variety of important developmental, structural, and metabolic functions. Recent experimental efforts suggest that astrocytes, a type of glial cell, are also directly involved in core cognitive processes such as learning and memory. While it is well established that astrocytes and neurons are connected to one another in feedback loops across many timescales and spatial scales, there is a gap in understanding the computational role of neuron–astrocyte interactions. To help bridge this gap, we draw on recent advances in AI and astrocyte imaging technology. In particular, we show that neuron–astrocyte networks can naturally perform the core computation of a Transformer, a particularly successful type of AI architecture. In doing so, we provide a concrete, normative, and experimentally testable account of neuron–astrocyte communication. Because Transformers are so successful across a wide variety of task domains, such as language, vision, and audition, our analysis may help explain the ubiquity, flexibility, and power of the brain’s neuron–astrocyte networks.
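The "core computation of a Transformer" referenced in the abstract is scaled dot-product self-attention. As a minimal illustrative sketch (not the paper's neuron–astrocyte construction; the matrix names and sizes here are assumptions for demonstration), single-head attention over a sequence of token embeddings can be written as:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X          : (T, d) sequence of T token embeddings
    Wq, Wk, Wv : (d, d) learned query/key/value projections
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(X.shape[1])            # pairwise similarities, scaled
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ V                                # attention-weighted mix of values

rng = np.random.default_rng(0)
T, d = 4, 8
X = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
```

Each output row is a convex combination of the value vectors, with mixing weights determined by query–key similarity; this is the operation the paper argues neuron–astrocyte networks can perform.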

          Related collections

          Most cited references (62)


          Neural networks and physical systems with emergent collective computational abilities.

          J Hopfield (1982)
          Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.
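The content-addressable memory described in this abstract can be sketched in a few lines: patterns are stored with a Hebbian outer-product rule and recalled from a corrupted subpart via asynchronous updates. This is an illustrative implementation consistent with the abstract, with pattern counts and sizes chosen arbitrarily:

```python
import numpy as np

def hopfield_recall(patterns, probe, sweeps=20, rng=None):
    """Recall a stored memory from a partial/corrupted probe.

    patterns : (P, N) array of +/-1 memories, stored via Hebbian weights
    probe    : (N,) +/-1 vector, e.g. a corrupted version of a stored memory
    """
    rng = rng or np.random.default_rng(0)
    P, N = patterns.shape
    W = patterns.T @ patterns / N            # Hebbian outer-product weights
    np.fill_diagonal(W, 0)                   # no self-connections
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):         # asynchronous, one-unit-at-a-time updates
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

rng = np.random.default_rng(1)
N = 64
memories = rng.choice([-1, 1], size=(3, N))
noisy = memories[0].copy()
noisy[:10] *= -1                             # corrupt a subpart of the memory
recalled = hopfield_recall(memories, noisy)
```

With a low memory load (3 patterns in 64 units), the dynamics flow to the nearest stored attractor, illustrating recovery of "an entire memory from any subpart of sufficient size."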

            Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs.

            H Markram et al. (1997)
            Activity-driven modifications in synaptic connections between neurons in the neocortex may occur during development and learning. In dual whole-cell voltage recordings from pyramidal neurons, the coincidence of postsynaptic action potentials (APs) and unitary excitatory postsynaptic potentials (EPSPs) was found to induce changes in EPSPs. Their average amplitudes were differentially up- or down-regulated, depending on the precise timing of postsynaptic APs relative to EPSPs. These observations suggest that APs propagating back into dendrites serve to modify single active synaptic connections, depending on the pattern of electrical activity in the pre- and postsynaptic neurons.
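The timing-dependent up- or down-regulation described above is commonly modeled with an exponential spike-timing-dependent plasticity (STDP) window. The following is a generic sketch of that standard model; the exponential form and parameter values are illustrative assumptions, not quantities from this reference:

```python
from math import exp

def stdp_weight_change(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Exponential STDP window.

    dt : t_post - t_pre in milliseconds.
         dt > 0 (EPSP precedes the postsynaptic AP) -> potentiation;
         dt < 0 (AP precedes the EPSP)              -> depression.
    The magnitude of the change decays with the spike-timing interval.
    """
    if dt > 0:
        return a_plus * exp(-dt / tau)     # potentiation branch
    if dt < 0:
        return -a_minus * exp(dt / tau)    # depression branch
    return 0.0
```

The sign of the change flips with the order of pre- and postsynaptic events, and near-coincident spikes produce the largest changes, matching the qualitative finding of the study.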

              Reactive astrocyte nomenclature, definitions, and future directions

              C Escartin et al. (2021)
              Reactive astrocytes are astrocytes undergoing morphological, molecular, and functional remodeling in response to injury, disease, or infection of the CNS. Although this remodeling was first described over a century ago, uncertainties and controversies remain regarding the contribution of reactive astrocytes to CNS diseases, repair, and aging. It is also unclear whether fixed categories of reactive astrocytes exist and, if so, how to identify them. We point out the shortcomings of binary divisions of reactive astrocytes into good-vs-bad, neurotoxic-vs-neuroprotective or A1-vs-A2. We advocate, instead, that research on reactive astrocytes include assessment of multiple molecular and functional parameters—preferably in vivo—plus multivariate statistics and determination of impact on pathological hallmarks in relevant models. These guidelines may spur the discovery of astrocyte-based biomarkers as well as astrocyte-targeting therapies that abrogate detrimental actions of reactive astrocytes, potentiate their neuro- and glioprotective actions, and restore or augment their homeostatic, modulatory, and defensive functions.

                Author and article information

                Journal: Proceedings of the National Academy of Sciences of the United States of America (Proc Natl Acad Sci U S A; PNAS), National Academy of Sciences
                ISSN: 0027-8424 (print); 1091-6490 (electronic)
                Published online: 14 August 2023; in issue: 22 August 2023
                Volume: 120; Issue: 34; Article: e2219150120

                Affiliations
                [1] MIT-IBM Watson AI Lab, IBM Research, Cambridge, MA 02142
                [2] Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
                [3] Department of Neurology, MassGeneral Institute for Neurodegenerative Diseases, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02115

                Author notes
                1To whom correspondence may be addressed. Email: leokoz8@mit.edu or krotov@ibm.com.

                Edited by Terrence Sejnowski, Salk Institute for Biological Studies, La Jolla, CA; received November 9, 2022; accepted June 22, 2023

                ORCID: https://orcid.org/0000-0003-4330-1201
                DOI: 10.1073/pnas.2219150120
                PMCID: 10450673
                PMID: 37579149

                Copyright © 2023 the Author(s). Published by PNAS.
                This open access article is distributed under the Creative Commons Attribution License 4.0 (CC BY).

                History
                Received: 09 November 2022
                Accepted: 22 June 2023

                Page count
                Pages: 8; Words: 5154

                Funding
                BrightFocus Foundation; Award ID: A2020833S; Award Recipient: Ksenia V. Kastanenka
                National Institutes of Health; Award ID: R01AG066171; Award Recipient: Ksenia V. Kastanenka

                Categories
                Research Article; Biological Sciences (Neuroscience); Physical Sciences (Computer Sciences)

                Keywords: neuroscience, astrocytes, transformers, glia, artificial intelligence
