
      Robust cortical encoding of 3D tongue shape during feeding in macaques


          Abstract

          Dexterous tongue deformation underlies eating, drinking, and speaking. The orofacial sensorimotor cortex has been implicated in the control of coordinated tongue kinematics, but little is known about how the brain encodes—and ultimately drives—the tongue’s 3D, soft-body deformation. Here we combine biplanar x-ray video, multi-electrode cortical recordings, and machine-learning-based decoding to explore the cortical representation of lingual deformation. We trained long short-term memory (LSTM) neural networks to decode various aspects of intraoral tongue deformation from cortical activity during feeding in male rhesus macaques. We show that both lingual movements and complex lingual shapes across a range of feeding behaviors could be decoded with high accuracy, and that the distribution of deformation-related information across cortical regions was consistent with previous studies of the arm and hand.


          Little is known about how the brain encodes—and ultimately drives—the tongue’s 3D deformation. Here, the authors successfully decoded complex tongue deformation from sensorimotor cortex neurons, suggesting a cortical representation of 3D tongue shape.
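          The decoding setup the abstracts describe (binned cortical activity mapped to kinematic targets, evaluated on held-out time) can be sketched schematically. The example below substitutes a simple ridge-regression decoder for the paper's LSTM networks and runs on simulated spike counts; every shape, variable name, and the linear ground-truth mapping are illustrative assumptions, not the paper's data or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated session: 1000 time bins, 64 recording channels, 3 kinematic
# targets (e.g. components of a tongue-shape descriptor). All of this is
# synthetic stand-in data, not the study's recordings.
T, C, K = 1000, 64, 3
spikes = rng.poisson(2.0, size=(T, C)).astype(float)   # binned spike counts
W_true = rng.normal(size=(C, K))                       # assumed linear mapping
kinematics = spikes @ W_true + 0.1 * rng.normal(size=(T, K))

# Split in time, as is standard for decoding analyses: train on the first
# 80% of bins, evaluate on the held-out remainder.
split = int(0.8 * T)
Xtr, Xte = spikes[:split], spikes[split:]
Ytr, Yte = kinematics[:split], kinematics[split:]

# Ridge regression decoder: W = (X'X + lambda*I)^-1 X'Y.
lam = 1.0
W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(C), Xtr.T @ Ytr)
pred = Xte @ W

# Report decoding accuracy as per-target R^2 on the held-out bins.
ss_res = ((Yte - pred) ** 2).sum(axis=0)
ss_tot = ((Yte - Yte.mean(axis=0)) ** 2).sum(axis=0)
r2 = 1.0 - ss_res / ss_tot
print(r2)  # near 1 here only because the simulated mapping is linear
```

          The high R^2 is an artifact of the linear simulation; the point of the sketch is the pipeline shape (bin spikes, split in time, fit decoder, score held-out predictions), into which an LSTM would slot in place of the ridge step.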


          Most cited references (56)


          Long Short-Term Memory

          Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
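            The gating mechanism this abstract describes (multiplicative gates controlling access to an additively updated cell state, the "constant error carousel") can be sketched as a single forward step in numpy. Note this is the modern formulation with a forget gate, which was introduced later by Gers and colleagues; the original 1997 cell lacked it. All dimensions and weight scales are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, params):
    """One LSTM step: input, forget, and output gates multiplicatively
    control what is written to, kept in, and read out of the cell state c."""
    Wi, Wf, Wo, Wg = params
    z = np.concatenate([x, h])     # gate inputs: current input + previous hidden
    i = sigmoid(Wi @ z)            # input gate: how much of the candidate to write
    f = sigmoid(Wf @ z)            # forget gate: how much of c to keep
    o = sigmoid(Wo @ z)            # output gate: how much of c to expose
    g = np.tanh(Wg @ z)            # candidate cell update
    c = f * c + i * g              # additive, gated update (the error carousel)
    h = o * np.tanh(c)             # new hidden state
    return h, c

# Toy dimensions (illustrative): 4-d input, 8-d hidden/cell state.
rng = np.random.default_rng(1)
n_in, n_h = 4, 8
params = [rng.normal(scale=0.1, size=(n_h, n_in + n_h)) for _ in range(4)]

h = np.zeros(n_h)
c = np.zeros(n_h)
for t in range(20):                # unroll over a short input sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, params)
print(h.shape)  # (8,)
```

            The additive form of the cell update `c = f*c + i*g` is what lets gradients flow across many steps without the multiplicative decay that plagues plain recurrent backpropagation.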

            3D Slicer as an image computing platform for the Quantitative Imaging Network.

            Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to the reproducibility and efficiency of quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free, open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, providing abstractions for the common tasks of data communication, visualization, and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development, and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on future directions that can further facilitate the development and validation of imaging biomarkers using 3D Slicer.

              DeepLabCut: markerless pose estimation of user-defined body parts with deep learning

              Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy.
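              The transfer-learning idea this abstract describes (freeze a pretrained feature extractor, fit only a lightweight head on a small labeled set) can be sketched in miniature. This is not DeepLabCut's actual API: the "backbone" below is just a fixed random projection standing in for a pretrained CNN, and all names, shapes, and the ~200-frame labeled set are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Pretrained backbone": a frozen feature extractor. In DeepLabCut this
# would be an ImageNet-pretrained CNN; here a fixed nonlinear random
# projection stands in for it, purely for illustration.
D_img, D_feat = 256, 64
W_backbone = rng.normal(scale=1.0 / np.sqrt(D_img), size=(D_img, D_feat))

def features(frames):
    return np.tanh(frames @ W_backbone)   # frozen: never updated

# A small labeled set (~200 frames), echoing the abstract's point that
# few labels suffice once the backbone is reused.
n_labeled = 200
frames = rng.normal(size=(n_labeled, D_img))
true_head = rng.normal(size=(D_feat, 2))        # hypothetical ground truth
keypoints = features(frames) @ true_head        # (x, y) target per frame

# "Fine-tuning" reduces to fitting only the small head on top of frozen
# features: an ordinary least-squares problem.
F = features(frames)
head, *_ = np.linalg.lstsq(F, keypoints, rcond=None)

# Evaluate the fitted head on held-out frames.
test_frames = rng.normal(size=(50, D_img))
err = np.abs(features(test_frames) @ head - features(test_frames) @ true_head).max()
print(err)
```

              Because only the head's parameters are estimated, a couple of hundred labeled frames determine it well, which is the mechanism behind the abstract's claim of good tracking from minimal training data.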

                Author and article information

                Contributors
                jd.laurencechasen@gmail.com
                Journal
                Nature Communications (Nat Commun)
                Nature Publishing Group UK (London)
                ISSN: 2041-1723
                Published: 24 May 2023
                Volume: 14
                Article number: 2991
                Affiliations
                [1] Department of Organismal Biology and Anatomy, The University of Chicago, 1027 E 57th Street, Chicago, IL 60637, USA
                [2] Department of Oral Health Sciences, School of Dentistry, University of Washington, 1959 NE Pacific Street, Box #357475, Seattle, WA 98195-7475, USA
                [3] Graduate Program in Neuroscience, University of Washington, 1959 NE Pacific St., Seattle, WA 98195-7475, USA
                [4] Program in Computational Neuroscience, The University of Chicago, 5812 South Ellis Avenue, Chicago, IL 60637, USA
                Author information
                http://orcid.org/0000-0002-7737-7194
                http://orcid.org/0000-0002-4913-6051
                Article
                38586
                DOI: 10.1038/s41467-023-38586-3
                PMCID: PMC10209084
                PMID: 37225708
                d70b3507-490c-4e73-a2bb-f40bee99a607
                © This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply 2023

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 25 May 2022
                Accepted: 8 May 2023
                Funding
                Funded by: FundRef https://doi.org/10.13039/100000072, U.S. Department of Health & Human Services | NIH | National Institute of Dental and Craniofacial Research (NIDCR);
                Award ID: R01DE027236
                Award Recipient :
                Funded by: FundRef https://doi.org/10.13039/100000049, U.S. Department of Health & Human Services | NIH | National Institute on Aging (NIA);
                Award ID: R01AG069227
                Award Recipient :
                Funded by: FundRef https://doi.org/10.13039/100000065, U.S. Department of Health & Human Services | NIH | National Institute of Neurological Disorders and Stroke (NINDS);
                Award ID: R01NS111982
                Award Recipient :
                Funded by: FundRef https://doi.org/10.13039/100000001, National Science Foundation (NSF);
                Award ID: GRFP
                Award Recipient :
                Categories
                Article
                Custom metadata
                © Springer Nature Limited 2023

                Uncategorized
                motor cortex, neural decoding
