
      Optimal Compensation for Temporal Uncertainty in Movement Planning


          Abstract

          Motor control requires the generation of a precise temporal sequence of control signals sent to the skeletal musculature. We describe an experiment that, for good performance, requires human subjects to plan movements taking into account uncertainty in their movement duration and the increase in that uncertainty with increasing movement duration. We do this by rewarding movements performed within a specified time window, and penalizing slower movements in some conditions and faster movements in others. Our results indicate that subjects compensated for their natural duration-dependent temporal uncertainty as well as an overall increase in temporal uncertainty that was imposed experimentally. Their compensation for temporal uncertainty, both the natural duration-dependent and imposed overall components, was nearly optimal in the sense of maximizing expected gain in the task. The motor system is able to model its temporal uncertainty and compensate for that uncertainty so as to optimize the consequences of movement.
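
          As a rough illustration of the optimization described above, the sketch below computes expected gain as a function of planned movement duration when the realized duration is noisy, the noise grows with planned duration, and only responses inside a reward window earn money while late responses are penalized. The specific numbers, the linear growth of the timing noise, and the reward and penalty values are illustrative assumptions, not the parameters used in the study.

import numpy as np
from scipy.stats import norm

# Illustrative model (assumed, not the study's parameters): realized duration is
# Gaussian around the planned duration, with a standard deviation that grows
# linearly with planned duration.
def duration_sd(planned_ms, base=10.0, slope=0.08):
    return base + slope * planned_ms

REWARD_WINDOW = (575.0, 625.0)   # ms: responses inside this window earn +1
LATE_BOUNDARY = 625.0            # ms: slower responses lose 1 (a "penalize slow" condition)
GAIN, LATE_LOSS = 1.0, -1.0

def expected_gain(planned_ms):
    sd = duration_sd(planned_ms)
    p_window = (norm.cdf(REWARD_WINDOW[1], planned_ms, sd)
                - norm.cdf(REWARD_WINDOW[0], planned_ms, sd))
    p_late = 1.0 - norm.cdf(LATE_BOUNDARY, planned_ms, sd)
    return GAIN * p_window + LATE_LOSS * p_late

candidates = np.linspace(400.0, 700.0, 601)
best = candidates[np.argmax([expected_gain(t) for t in candidates])]
print(f"expected-gain-maximizing planned duration: about {best:.0f} ms")

          Because the timing noise grows with duration and lateness is penalized, the maximum falls earlier than the center of the reward window; with a penalty on fast responses instead, it shifts later. That shift with the imposed costs is the kind of compensation the experiment measures.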

          Author Summary

          Many recent models of motor planning are based on the idea that the CNS plans movements to minimize “costs” intrinsic to motor performance. A minimum variance model would predict that the motor system plans movements that minimize motor error (as measured by the variance in movement) subject to the constraint that the movement be completed within a specified time limit. A complementary model would predict that the motor system minimizes movement time subject to the constraint that movement variance not exceed a certain fixed threshold. But neither of these models is adequate to predict performance in everyday tasks that include external costs imposed by the environment where good performance requires that the motor system select a tradeoff between speed and accuracy. In driving to the airport to catch a plane, for example, there are very real costs associated with driving too fast and also with being just a bit too late. But the “optimal” tradeoff depends on road conditions and also on how important it is to catch the plane. We examine motor performance in analogous experimental tasks where we impose arbitrary monetary costs on movements that are “late” or “early” and show that humans systematically trade off risk and reward so as to maximize their expected monetary gain.


          Most cited references (53)


          Bayesian integration in sensorimotor learning.

            When we learn a new motor skill, such as playing an approaching tennis ball, both our sensors and the task possess variability. Our sensors provide imperfect information about the ball's velocity, so we can only estimate it. Combining information from multiple modalities can reduce the error in this estimate. On a longer time scale, not all velocities are a priori equally probable, and over the course of a match there will be a probability distribution of velocities. According to Bayesian theory, an optimal estimate results from combining information about the distribution of velocities (the prior) with evidence from sensory feedback. As uncertainty increases, when playing in fog or at dusk, the system should increasingly rely on prior knowledge. To use a Bayesian strategy, the brain would need to represent the prior distribution and the level of uncertainty in the sensory feedback. Here we control the statistical variations of a new sensorimotor task and manipulate the uncertainty of the sensory feedback. We show that subjects internally represent both the statistical distribution of the task and their sensory uncertainty, combining them in a manner consistent with a performance-optimizing Bayesian process. The central nervous system therefore employs probabilistic models during sensorimotor learning.
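
            For Gaussian priors and Gaussian sensory noise, the reliability-weighted combination described above has a simple closed form. The sketch below is a minimal illustration with made-up numbers; it is not the task or the values used in that study.

# Posterior-mean estimate for a Gaussian prior combined with a Gaussian likelihood.
# All numbers are illustrative assumptions.
def bayes_estimate(sensed, prior_mean, prior_sd, sensory_sd):
    w = prior_sd**2 / (prior_sd**2 + sensory_sd**2)  # weight given to the sensory evidence
    return w * sensed + (1.0 - w) * prior_mean

prior_mean, prior_sd = 1.0, 0.5   # prior over the task variable
sensed = 2.0                      # noisy sensory reading on one trial

for sensory_sd in (0.1, 0.5, 2.0):  # sharp feedback through heavily blurred feedback
    est = bayes_estimate(sensed, prior_mean, prior_sd, sensory_sd)
    print(f"sensory sd {sensory_sd:.1f} -> optimal estimate {est:.2f}")

            With precise feedback the estimate stays near the sensed value; as the feedback is degraded (the fog-or-dusk case), the estimate is pulled toward the prior mean, which is the behavioral signature the study reports.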

            Adaptive representation of dynamics during learning of a motor task.

            We investigated how the CNS learns to control movements in different dynamical conditions, and how this learned behavior is represented. In particular, we considered the task of making reaching movements in the presence of externally imposed forces from a mechanical environment. This environment was a force field produced by a robot manipulandum, and the subjects made reaching movements while holding the end-effector of this manipulandum. Since the force field significantly changed the dynamics of the task, subjects' initial movements in the force field were grossly distorted compared to their movements in free space. However, with practice, hand trajectories in the force field converged to a path very similar to that observed in free space. This indicated that for reaching movements, there was a kinematic plan independent of dynamical conditions. The recovery of performance within the changed mechanical environment is motor adaptation. In order to investigate the mechanism underlying this adaptation, we considered the response to the sudden removal of the field after a training phase. The resulting trajectories, named aftereffects, were approximately mirror images of those that were observed when the subjects were initially exposed to the field. This suggested that the motor controller was gradually composing a model of the force field, a model that the nervous system used to predict and compensate for the forces imposed by the environment. In order to explore the structure of the model, we investigated whether adaptation to a force field, as presented in a small region, led to aftereffects in other regions of the workspace. We found that indeed there were aftereffects in workspace regions where no exposure to the field had taken place; that is, there was transfer beyond the boundary of the training data. This observation rules out the hypothesis that the subject's model of the force field was constructed as a narrow association between visited states and experienced forces; that is, adaptation was not via composition of a look-up table. In contrast, subjects modeled the force field by a combination of computational elements whose output was broadly tuned across the motor state space. These elements formed a model that extrapolated to outside the training region in a coordinate system similar to that of the joints and muscles rather than end-point forces. This geometric property suggests that the elements of the adaptive process represent dynamics of a motor task in terms of the intrinsic coordinate system of the sensors and actuators.
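
            A toy version of the kind of model argued for here, broadly tuned elements trained by prediction error, is sketched below. The curl force field, the Gaussian tuning over hand velocity, and the learning rule are assumptions made for illustration; they are not the study's apparatus or analysis.

import numpy as np

rng = np.random.default_rng(0)

# "Environment" (assumed for illustration): a velocity-dependent curl force field.
B = np.array([[0.0, 13.0], [-13.0, 0.0]])
def field(velocity):
    return B @ velocity

# Internal model: predicted force is a weighted sum of Gaussian elements that are
# broadly tuned over hand-velocity space.
centers = np.array([(x, y) for x in np.linspace(-0.6, 0.6, 7)
                           for y in np.linspace(-0.6, 0.6, 7)])
WIDTH = 0.4
def features(velocity):
    d2 = ((centers - velocity) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * WIDTH**2))

weights = np.zeros((2, len(centers)))  # 2 force components x number of elements

# Error-driven adaptation, with training confined to a small region of velocity space.
LEARNING_RATE = 0.02
for _ in range(5000):
    v = rng.uniform(-0.2, 0.2, size=2)        # training region only
    phi = features(v)
    error = field(v) - weights @ phi          # force prediction error
    weights += LEARNING_RATE * np.outer(error, phi)

# Test generalization well outside the training region.
v_test = np.array([0.5, -0.5])
print("true force      :", field(v_test))
print("predicted force :", weights @ features(v_test))

            Because each element is broadly tuned, the learned model still produces partial compensation at velocities it never experienced, the transfer beyond the training region that the abstract describes; how well it extrapolates depends on the assumed tuning width.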

              Orbitofrontal cortex and its contribution to decision-making.

              Damage to orbitofrontal cortex (OFC) produces an unusual pattern of deficits. Patients have intact cognitive abilities but are impaired in making everyday decisions. Here we review anatomical, neuropsychological, and neurophysiological evidence to determine the neuronal mechanisms that might underlie these impairments. We suggest that OFC plays a key role in processing reward: It integrates multiple sources of information regarding the reward outcome to derive a value signal. In effect, OFC calculates how rewarding a reward is. This value signal can then be held in working memory where it can be used by lateral prefrontal cortex to plan and organize behavior toward obtaining the outcome, and by medial prefrontal cortex to evaluate the overall action in terms of its success and the effort that was required. Thus, acting together, these prefrontal areas can ensure that our behavior is most efficiently directed towards satisfying our needs.

                Author and article information

                Contributors
                Role: Editor
                Journal
                PLoS Computational Biology (PLoS Comput Biol)
                Public Library of Science (San Francisco, USA)
                ISSN: 1553-734X (print); 1553-7358 (electronic)
                Published: 25 July 2008 (July 2008 issue)
                Volume 4, Issue 7: e1000130
                Affiliations
                [1] Department of Psychology and Center for Neural Science, New York University, New York, New York, United States of America
                University College London, United Kingdom
                Author notes

                Conceived and designed the experiments: TEH LTM MSL. Performed the experiments: TEH. Analyzed the data: TEH. Wrote the paper: TEH LTM MSL.

                Article
                Manuscript ID: 07-PLCB-RA-0790R3
                DOI: 10.1371/journal.pcbi.1000130
                PMCID: 2442880
                PMID: 18654619
                UUID: 3b0e63d2-f8a0-4e8e-80c5-404521427cbf
                Hudson et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
                History
                Received: 12 December 2007
                Accepted: 17 June 2008
                Page count
                Pages: 9
                Categories
                Research Article
                Computational Biology/Computational Neuroscience
                Neuroscience/Behavioral Neuroscience
                Neuroscience/Motor Systems
                Neuroscience/Sensory Systems
                Neuroscience/Theoretical Neuroscience

                Quantitative & Systems biology
