      Fast, Flexible Closed-Loop Feedback: Tracking Movement in “Real-Millisecond-Time”

      research-article


          Abstract

One of the principal functions of the brain is to control movement and rapidly adapt behavior to a changing external environment. Over the last decades, our ability to monitor brain activity, and to manipulate it while also manipulating the environment the animal moves through, has grown increasingly sophisticated. However, our ability to track the movement of the animal in real time has not kept pace. Here, we use a dynamic vision sensor (DVS) based event-driven neuromorphic camera system to implement real-time, low-latency tracking of a single whisker that mice can move at ∼25 Hz. The customized DVS system described here converts whisker motion into a series of events that can be used to estimate the position of the whisker and to trigger a position-based output interactively within 2 ms. This neuromorphic chip-based closed-loop system provides feedback rapidly and flexibly. With this system, it becomes possible to use the movement of whiskers, or in principle the movement of any part of the body, to reward or punish in a rapidly reconfigurable way. These methods can be used to manipulate behavior and the neural circuits that help animals adapt to changing values of a sequence of motor actions.
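
To make the event-based pipeline concrete, the following Python sketch shows the idea described above: each DVS event updates a running position estimate, and a crossing of a position threshold fires an output. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; every name and parameter below is hypothetical.

from collections import deque

WINDOW_EVENTS = 200   # hypothetical: number of recent events in the estimate
THRESHOLD_PX = 64.0   # hypothetical: trigger when the estimate crosses this

recent_x = deque(maxlen=WINDOW_EVENTS)

def trigger_output(t_us: int, position_px: float) -> None:
    # Placeholder for a low-latency hardware output (e.g., a TTL pulse).
    print(f"trigger at t={t_us} us, whisker position ~{position_px:.1f} px")

def on_event(t_us: int, x: int, y: int, polarity: bool) -> None:
    """Handle one DVS event (a pixel-level brightness change)."""
    recent_x.append(x)
    # Crude position estimate: mean x coordinate of the recent events.
    estimate = sum(recent_x) / len(recent_x)
    if estimate > THRESHOLD_PX:
        trigger_output(t_us, estimate)

Because the estimate is updated per event rather than per frame, latency is bounded by event throughput rather than a camera's frame rate, which is what makes the ∼2 ms closed-loop response plausible.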

Most cited references (41)


          DeepLabCut: markerless pose estimation of user-defined body parts with deep learning

          Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy.
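
As a concrete illustration of the workflow that abstract describes, the sketch below runs the published deeplabcut Python API end to end. The project name, experimenter, and video paths are placeholders, and exact defaults may differ across DeepLabCut versions.

import deeplabcut

# Create a project around one or more behavior videos (paths are placeholders).
config = deeplabcut.create_new_project(
    "whisker-tracking", "experimenter",
    ["/data/videos/session1.mp4"], copy_videos=True)

deeplabcut.extract_frames(config)           # select frames (~200) to label
deeplabcut.label_frames(config)             # GUI for marking body parts by hand
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)            # transfer learning from a pretrained network
deeplabcut.evaluate_network(config)
deeplabcut.analyze_videos(config, ["/data/videos/session2.mp4"])

The transfer-learning step is what keeps the labeling burden at a few hundred frames: the backbone arrives pretrained, and only the pose-readout layers must learn the user-defined body parts from scratch.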

            Imaging large-scale neural activity with cellular resolution in awake, mobile mice.

We report a technique for two-photon fluorescence imaging with cellular resolution in awake, behaving mice with minimal motion artifact. The apparatus combines an upright, table-mounted two-photon microscope with a spherical treadmill consisting of a large, air-supported Styrofoam ball. Mice, with implanted cranial windows, are head restrained under the objective while their limbs rest on the ball's upper surface. Following adaptation to head restraint, mice maneuver on the spherical treadmill as their heads remain motionless. Image sequences demonstrate that running-associated brain motion is limited to approximately 2–5 μm. In addition, motion is predominantly in the focal plane, with little out-of-plane motion, making the application of a custom-designed Hidden-Markov-Model-based motion correction algorithm useful for postprocessing. Behaviorally correlated calcium transients from large neuronal and astrocytic populations were routinely measured, with an estimated motion-induced false positive error rate of <5%.
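
The authors' Hidden-Markov-Model-based motion correction is custom, but the key observation (motion is mostly in the focal plane) can be illustrated with a much simpler stand-in: rigid x-y registration by phase correlation with scikit-image. This is a generic sketch, not the authors' algorithm; `stack` is assumed to be a (frames, height, width) NumPy array of imaging frames.

import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def correct_inplane_motion(stack: np.ndarray) -> np.ndarray:
    """Register each frame to the mean image by rigid x-y translation."""
    template = stack.mean(axis=0)
    corrected = np.empty_like(stack)
    for i, frame in enumerate(stack):
        # Estimated (dy, dx) displacement of this frame relative to the template.
        (dy, dx), _, _ = phase_cross_correlation(template, frame, upsample_factor=10)
        corrected[i] = nd_shift(frame, (dy, dx))
    return corrected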

A 128 × 128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor


                Author and article information

Journal
eNeuro, Society for Neuroscience
ISSN: 2373-2822
Published: 14 October 2019; 30 October 2019 (Nov-Dec 2019 issue)
Volume: 6, Issue: 6, eLocation: ENEURO.0147-19.2019
                Affiliations
[1] Institute of Biology, Humboldt University of Berlin, D-10117 Berlin, Germany
[2] Eridian Systems, D-10179 Berlin, Germany
[3] Department of Computer Science, University of Sheffield, Sheffield S10 2TP, United Kingdom
[4] Bristol Robotics Laboratory, University of Bristol and University of the West of England, Bristol BS16 1QY, United Kingdom
                Author notes

                The authors declare no competing financial interests.

                Author contributions: K.S., V.B., B.M., M.J.P., M.E.L., and R.N.S.S. designed research; K.S. performed research; K.S., V.B., B.M., and M.J.P. contributed unpublished reagents/analytic tools; K.S. analyzed data; K.S. and R.N.S.S. wrote the paper.

This work was supported by Deutsche Forschungsgemeinschaft Grants 2112280105, LA 3442/3-1, and LA 3442/5-1 (to M.E.L.) and Project Number 327654276–SFB 1315; the European Union's Horizon 2020 Research and Innovation Program and Euratom Research and Training Program 2014–2018 Grant 670118 (to M.E.L.); the Human Brain Project EU Grant 720270, HBP SGA1 and SGA2, “Context-Sensitive Multisensory Object Recognition: A Deep Network Model Constrained by Multi-Level, Multi-Species Data” (to M.E.L.); and the Einstein Stiftung.

Correspondence should be addressed to Robert N. S. Sachdev at robert.sachdev@charite.de or Matthew E. Larkum at matthew.larkum@gmail.com.
                Author information
                https://orcid.org/0000-0003-4368-8143
                https://orcid.org/0000-0002-8642-4845
                https://orcid.org/0000-0001-9799-2656
                https://orcid.org/0000-0002-6627-0199
Article
Article ID: eN-MNT-0147-19
DOI: 10.1523/ENEURO.0147-19.2019
PMCID: PMC6825957
PMID: 31611334
                Copyright © 2019 Sehara et al.

                This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

History
Received: 17 April 2019
Revised: 12 September 2019
Accepted: 16 September 2019
                Page count
                Figures: 7, Tables: 0, Equations: 8, References: 52, Pages: 18, Words: 13627
                Funding
Funded by: Deutsche Forschungsgemeinschaft (DFG), http://doi.org/10.13039/501100001659
                Award ID: 2112280105
                Award ID: LA 3442/3-1
                Award ID: LA 3442/5-1
                Funded by: European Union's Horizon 2020
                Funded by: Human Brain Project
                Categories
                Methods/New Tools
                Novel Tools and Methods
                Custom metadata
                November/December 2019

Keywords: feedback, kinematics, motor, neuromorphic, somatosensory, virtual reality
