
      Catalyzing next-generation Artificial Intelligence through NeuroAI


          Abstract

          Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to their living counterparts. The embodied Turing test shifts the focus from those capabilities like game playing and language that are especially well-developed or uniquely human to those capabilities – inherited from over 500 million years of evolution – that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.
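
To make the proposal concrete, here is a highly schematic sketch of what an embodied Turing test harness could look like. The paper defines the concept, not this code; every name below (`agent_episodes`, `animal_episodes`, the `judge` function) is an illustrative assumption. The idea: a judge tries to tell agent behavior from animal behavior on the same sensorimotor task, and the agent "passes" when the judge performs at chance.

```python
# Hypothetical sketch of an embodied Turing test harness (not from the paper).
import random

def embodied_turing_test(agent_episodes, animal_episodes, judge, n_trials=1000):
    """Return the judge's accuracy at telling agent from animal behavior.

    Each episode is a behavioral trajectory (e.g., a sequence of observations
    and actions) recorded on the same task. Accuracy near 0.5 means the agent's
    sensorimotor behavior is indistinguishable from the animal's.
    """
    correct = 0
    for _ in range(n_trials):
        if random.random() < 0.5:
            episode, label = random.choice(agent_episodes), "agent"
        else:
            episode, label = random.choice(animal_episodes), "animal"
        if judge(episode) == label:
            correct += 1
    return correct / n_trials
```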

          Editor's summary

          One of the ambitions of computational neuroscience is that we will continue to make improvements in the field of artificial intelligence that will be informed by advances in our understanding of how the brains of various species evolved to process information. To that end, here the authors propose an expanded version of the Turing test that involves embodied sensorimotor interactions with the world as a new framework for accelerating progress in artificial intelligence.

          Most cited references (53)

          Human-level control through deep reinforcement learning.

          The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
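
The two ingredients this abstract names can be sketched compactly: a convolutional network that maps raw pixels to one action value per move, and a temporal-difference update toward the target r + gamma * max_a' Q(s', a'). Below is a minimal PyTorch rendering of that idea, not the authors' implementation; the layer sizes, discount factor, and the replay-buffer and target-network details are assumptions here.

```python
# Minimal sketch of the deep Q-network idea (illustrative, not DeepMind's code).
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a stack of 84x84 grayscale game frames to one Q-value per action."""
    def __init__(self, n_actions: int, in_frames: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # 7x7 map for 84x84 input
            nn.Linear(512, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def td_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One temporal-difference step on a minibatch sampled from a replay buffer."""
    states, actions, rewards, next_states, dones = batch
    # Q(s, a) for the actions actually taken.
    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target r + gamma * max_a' Q(s', a'), computed from a
        # frozen copy of the network for stability; no future value at episode end.
        target = rewards + gamma * target_net(next_states).max(1).values * (1 - dones)
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point the abstract stresses is that the same network, update rule, and hyperparameters were reused across all 49 games; nothing in the sketch above is task-specific beyond the number of actions.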

            DeepLabCut: markerless pose estimation of user-defined body parts with deep learning

            Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy.
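
The transfer-learning recipe described here can be sketched generically: start from a backbone pretrained on a large image dataset and fine-tune a small head that emits one score map per user-defined body part. The PyTorch sketch below illustrates that idea only; it is not DeepLabCut's actual code or API, and the backbone choice, layer sizes, and number of body parts are assumptions.

```python
# Generic transfer-learning sketch for markerless pose estimation
# (illustrative; not the DeepLabCut implementation).
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class PoseEstimator(nn.Module):
    def __init__(self, n_bodyparts: int = 4):
        super().__init__()
        backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
        # Reuse everything up to the final residual stage (output stride 32);
        # these weights carry the generic visual features learned elsewhere.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # Small head trained from scratch: upsample and predict one score map
        # per user-defined body part.
        self.head = nn.Sequential(
            nn.ConvTranspose2d(2048, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(256, n_bodyparts, kernel_size=1),
        )

    def forward(self, x):
        return self.head(self.features(x))

model = PoseEstimator(n_bodyparts=4)
heatmaps = model(torch.randn(1, 3, 256, 256))  # -> shape [1, 4, 16, 16]
# Training would minimize, e.g., MSE between these maps and Gaussian targets
# centered on the hand-labeled keypoints; pretraining is what lets a few
# hundred labeled frames suffice, as the abstract reports.
```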

              Mastering the game of Go with deep neural networks and tree search.

              The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses 'value networks' to evaluate board positions and 'policy networks' to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
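
The key move in this abstract, replacing random rollouts with learned evaluations inside Monte Carlo tree search, can be shown compactly. The sketch below is a generic PUCT-style search, not DeepMind's implementation; `policy_fn`, `value_fn`, and `step_fn` are assumed interfaces standing in for the trained networks and the game rules.

```python
# Generic sketch of MCTS guided by policy priors and a value network
# (illustrative; not the AlphaGo implementation).
import math

class Node:
    def __init__(self, prior: float):
        self.prior = prior      # P(s, a) from the policy network
        self.visits = 0         # N(s, a)
        self.value_sum = 0.0    # W(s, a)
        self.children = {}      # action -> Node

    def q(self) -> float:
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """Pick the action maximizing Q + U, trading value against prior and visits."""
    total = math.sqrt(sum(ch.visits for ch in node.children.values()) + 1)
    return max(
        node.children.items(),
        key=lambda kv: kv[1].q() + c_puct * kv[1].prior * total / (1 + kv[1].visits),
    )

def mcts(root_state, root, policy_fn, value_fn, step_fn, n_simulations=200):
    for _ in range(n_simulations):
        # step_fn is assumed pure: it returns a new state, leaving inputs intact.
        state, node, path = root_state, root, [root]
        # 1. Selection: walk down the tree along high Q + U edges.
        while node.children:
            action, node = select_child(node)
            state = step_fn(state, action)
            path.append(node)
        # 2. Expansion: add children with priors from the policy network.
        for action, prior in policy_fn(state):
            node.children[action] = Node(prior)
        # 3. Evaluation: score the leaf with the value network -- no rollout.
        value = value_fn(state)
        # 4. Backup: propagate the value, flipping sign between players.
        for n in reversed(path):
            n.visits += 1
            n.value_sum += value
            value = -value
    # Play the most-visited move at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

```

The design point mirrored from the abstract is step 3: the leaf is scored by a learned value network rather than by simulating random games to the end, which is what lets the search stay shallow yet strong.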

                Author and article information

                Contributors
                Correspondence: zador@cshl.edu
                Journal
                Nature Communications (Nat Commun)
                Nature Publishing Group UK, London
                ISSN: 2041-1723
                Published: 22 March 2023
                Volume: 14
                Article number: 1597
                Affiliations
                [1 ]GRID grid.225279.9, ISNI 0000 0004 0387 3667, Cold Spring Harbor Laboratory, ; Cold Spring Harbor, NY 11724 USA
                [2 ]GRID grid.21729.3f, ISNI 0000000419368729, Department of Psychiatry, , Columbia University, ; New York, NY 10027 USA
                [3 ]GRID grid.510486.e, Mila, ; Montréal, QC H2S 3H1 Canada
                [4 ]GRID grid.14709.3b, ISNI 0000 0004 1936 8649, School of Computer Science, , McGill University, ; Montreal, Canada
                [5 ]GRID grid.14709.3b, ISNI 0000 0004 1936 8649, Montreal Neurological Institute, , McGill University, ; Montreal, Canada
                [6 ]GRID grid.14709.3b, ISNI 0000 0004 1936 8649, Department of Neurology & Neurosurgery, , McGill University, ; Montreal, Canada
                [7 ]GRID grid.440050.5, ISNI 0000 0004 0408 2525, Learning in Machines and Brains Program, , CIFAR, ; Toronto, Canada
                [8 ]GRID grid.38142.3c, ISNI 000000041936754X, Department of Organismic and Evolutionary Biology, , Harvard University, ; Cambridge, MA 02138 USA
                [9 ]GRID grid.168010.e, ISNI 0000000419368956, Department of Bioengineering, , Stanford University, ; Stanford, CA 94305 USA
                [10 ]Google Deepmind, London, N1C 4AG UK
                [11 ]GRID grid.430264.7, ISNI 0000 0004 4648 6763, Flatiron Institute, , Simons Foundation, ; New York, NY 10010 USA
                [12 ]GRID grid.19006.3e, ISNI 0000 0000 9632 6718, Department of Neurobiology, , University of California Los Angeles, ; Los Angeles, CA 90095 USA
                [13 ]GRID grid.7445.2, ISNI 0000 0001 2113 8111, Department of Bioengineering, , Imperial College London, ; London, SW7 2BW UK
                [14 ]GRID grid.116068.8, ISNI 0000 0001 2341 2786, Department of Brain and Cognitive Sciences, , MIT, ; Cambridge, MA 02139 USA
                [15 ]GRID grid.168010.e, ISNI 0000000419368956, Department of Applied Physics, , Stanford University, ; Stanford, CA 94305 USA
                [16 ]Numenta, Redwood City, CA 94063 USA
                [17 ]GRID grid.25879.31, ISNI 0000 0004 1936 8972, Department of Neuroscience, , University of Pennsylvania, ; Philadelphia, PA 19104 USA
                [18 ]Meta, Menlo Park, CA 94025 USA
                [19 ]GRID grid.416169.d, ISNI 0000 0004 0381 3461, Department of Electrical and Computer Engineering, , NYU, ; Brooklyn, NY 11201 USA
                [20 ]GRID grid.116068.8, ISNI 0000 0001 2341 2786, Media Lab, , MIT, ; Cambridge, MA 02140 USA
                [21 ]GRID grid.47840.3f, ISNI 0000 0001 2181 7878, Helen Wills Neuroscience Institute, , University of California Berkeley, ; Berkeley, CA 94720 USA
                [22 ]GRID grid.8591.5, ISNI 0000 0001 2322 4988, Department of Basic Neurosciences, , University of Geneva, ; Genève, 1211 Switzerland
                [23 ]GRID grid.137628.9, ISNI 0000 0004 1936 8753, Center for Neural Science, , NYU, ; New York, NY 10003 USA
                [24 ]GRID grid.250671.7, ISNI 0000 0001 0662 7144, Salk Institute for Biological Studies, ; La Jolla, CA 92037 USA
                [25 ]GRID grid.137628.9, ISNI 0000 0004 1936 8753, Departments of Neural Science, Mathematics, and Psychology, , NYU, ; New York, NY 10003 USA
                [26 ]GRID grid.16753.36, ISNI 0000 0001 2299 3507, Department of Physiology, , Northwestern University, ; Chicago, IL 60611 USA
                [27 ]GRID grid.168010.e, ISNI 0000000419368956, Department of Electrical Engineering, , Stanford University, ; Stanford, CA 94305 USA
                [28 ]GRID grid.39382.33, ISNI 0000 0001 2160 926X, Department of Neuroscience, , Baylor College of Medicine, ; Houston, TX 77030 USA
                Author information
                http://orcid.org/0000-0002-8431-9136
                http://orcid.org/0000-0003-0645-1964
                http://orcid.org/0000-0002-9322-3515
                http://orcid.org/0000-0001-7758-6896
                http://orcid.org/0000-0002-3205-3794
                http://orcid.org/0000-0003-4507-8648
                http://orcid.org/0000-0002-1592-5896
                http://orcid.org/0000-0002-3414-8244
                http://orcid.org/0000-0002-0622-7391
                http://orcid.org/0000-0002-1206-527X
                http://orcid.org/0000-0001-7696-447X
                http://orcid.org/0000-0002-4305-6376
                Article
                DOI: 10.1038/s41467-023-37180-x
                PMCID: PMC10033876
                PMID: 36949048
                © The Author(s) 2023

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 11 September 2022
                Accepted: 3 March 2023
                Categories
                Perspective

                Keywords
                neuroscience, computer science
