      Where Are You Throwing the Ball? I Better Watch Your Body, Not Just Your Arm!

Research article


          Abstract

The ability to intercept or avoid a moving object, whether to catch a ball, snatch one’s prey, or avoid the path of a predator, is a skill that has been acquired throughout evolution by many species in the animal kingdom. This requires processing early visual cues in order to program anticipatory motor responses tuned to the forthcoming event. Here, we explore the nature of the early kinematic cues that could inform an observer about the future direction of a ball projected with an unconstrained overarm throw. Our goal was to pinpoint the body segments that, throughout the temporal course of the throwing action, could provide key cues for accurately predicting the side of the outgoing ball. We recorded whole-body kinematics from twenty non-expert participants performing unconstrained overarm throws at four different targets placed on a vertical plane at a distance of 6 m. To characterize the spatiotemporal structure of the information about the outgoing ball direction embedded in the kinematics of the throwing action, we introduced a novel combination of dimensionality reduction and machine learning techniques. The recorded kinematics clearly shows that throwing styles differed considerably across individuals, with corresponding inter-individual differences in the spatiotemporal structure of the thrower’s predictability. We found that for most participants it is possible to predict the region where the ball would hit the target plane, with an accuracy above 80%, as early as 400–500 ms before ball release. Interestingly, the body parts that provided the most informative cues about the action outcome varied with the throwing style and over the time course of the throwing action. Not surprisingly, at the very end of the action the throwing arm is the most informative body segment. However, cues allowing predictions to be made earlier than 200 ms before release are typically associated with other body parts, such as the lower limbs and the contralateral arm. These findings are discussed in the context of the sport-science literature on interactive throwing and catching tasks, as well as from the wider perspective of the role of sensorimotor coupling in interpersonal social interactions.
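As a reading aid, the decoding approach summarized in the abstract (dimensionality reduction followed by a classifier, evaluated at successive time points before ball release) could look roughly like the sketch below. This is a minimal illustration under stated assumptions, not the authors' implementation: the choice of PCA and a linear SVM, the 5-fold cross-validation, the function name decoding_accuracy_over_time, and the assumed data layout (throws × frames × flattened marker coordinates, time-aligned so the last frame is ball release) are all illustrative.

```python
# Minimal sketch (assumptions noted above): how early can the ball's landing
# region be decoded from whole-body kinematics recorded before release?
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def decoding_accuracy_over_time(kinematics, target_region, fs=100.0, n_components=10):
    """Cross-validated classification accuracy at each frame before ball release.

    kinematics    : array (n_throws, n_frames, n_markers * 3), assumed time-aligned
                    so that the last frame corresponds to ball release.
    target_region : array (n_throws,) with the region of the target plane hit
                    by each throw (e.g., 0-3 for four targets).
    """
    n_throws, n_frames, _ = kinematics.shape
    accuracies = []
    for frame in range(n_frames):
        X = kinematics[:, frame, :]              # whole-body posture at this instant
        clf = make_pipeline(StandardScaler(),    # z-score each marker coordinate
                            PCA(n_components=n_components),  # dimensionality reduction
                            SVC(kernel="linear"))            # linear classifier
        scores = cross_val_score(clf, X, target_region, cv=5)  # 5-fold accuracy
        accuracies.append(scores.mean())
    # time axis in ms relative to release (0 ms = release, negative = earlier)
    t_ms = (np.arange(n_frames) - (n_frames - 1)) / fs * 1000.0
    return t_ms, np.array(accuracies)
```

A time course like this makes statements such as “above 80% accuracy 400–500 ms before release” directly measurable; repeating the analysis with the feature columns restricted to a single body segment is one simple way to ask which segments carry the earliest informative cues.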

          Related collections

Most cited references (67)


          Action plans used in action observation.

          How do we understand the actions of others? According to the direct matching hypothesis, action understanding results from a mechanism that maps an observed action onto motor representations of that action. Although supported by neurophysiological and brain-imaging studies, direct evidence for this hypothesis is sparse. In visually guided actions, task-specific proactive eye movements are crucial for planning and control. Because the eyes are free to move when observing such actions, the direct matching hypothesis predicts that subjects should produce eye movements similar to those produced when they perform the tasks. If an observer analyses action through purely visual means, however, eye movements will be linked reactively to the observed action. Here we show that when subjects observe a block stacking task, the coordination between their gaze and the actor's hand is predictive, rather than reactive, and is highly similar to the gaze-hand coordination when they perform the task themselves. These results indicate that during action observation subjects implement eye motor programs directed by motor representations of manual actions and thus provide strong evidence for the direct matching hypothesis.

            Prediction in joint action: what, when, and where.

            Drawing on recent findings in the cognitive and neurosciences, this article discusses how people manage to predict each other's actions, which is fundamental for joint action. We explore how a common coding of perceived and performed actions may allow actors to predict the what, when, and where of others' actions. The "what" aspect refers to predictions about the kind of action the other will perform and to the intention that drives the action. The "when" aspect is critical for all joint actions requiring close temporal coordination. The "where" aspect is important for the online coordination of actions because actors need to effectively distribute a common space. We argue that although common coding of perceived and performed actions alone is not sufficient to enable one to engage in joint action, it provides a representational platform for integrating the actions of self and other. The final part of the paper considers links between lower-level processes like action simulation and higher-level processes like verbal communication and mental state attribution that have previously been at the focus of joint action research. Copyright © 2009 Cognitive Science Society, Inc.

              Action anticipation and motor resonance in elite basketball players

              We combined psychophysical and transcranial magnetic stimulation studies to investigate the dynamics of action anticipation and its underlying neural correlates in professional basketball players. Athletes predicted the success of free shots at a basket earlier and more accurately than did individuals with comparable visual experience (coaches or sports journalists) and novices. Moreover, performance between athletes and the other groups differed before the ball was seen to leave the model's hands, suggesting that athletes predicted the basket shot's fate by reading the body kinematics. Both visuo-motor and visual experts showed a selective increase of motor-evoked potentials during observation of basket shots. However, only athletes showed a time-specific motor activation during observation of erroneous basket throws. Results suggest that achieving excellence in sports may be related to the fine-tuning of specific anticipatory 'resonance' mechanisms that endow elite athletes' brains with the ability to predict others' actions ahead of their realization.

                Author and article information

Journal
Frontiers in Human Neuroscience (Front. Hum. Neurosci.)
Frontiers Media S.A.
ISSN: 1662-5161
Published: 30 October 2017, Volume 11, Article 505
Affiliations
1. Laboratory of Neuromotor Physiology, Santa Lucia Foundation, Rome, Italy
2. Department of Biomechanics, Institute of Sukan Negara, Kuala Lumpur, Malaysia
3. Department of Systems Medicine and Center of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy
4. Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Messina, Italy
                Author notes

                Edited by: Christopher J. Hasson, Northeastern University, United States

                Reviewed by: Clint Hansen, University of Kiel, Germany; Olivier White, INSERM U1093, Université de Bourgogne Franche Comté, France

*Correspondence: Antonella Maselli, a.maselli@hsantalucia.it
Article
DOI: 10.3389/fnhum.2017.00505
PMCID: PMC5674933
PMID: 29163094
                Copyright © 2017 Maselli, Dhawan, Cesqui, Russo, Lacquaniti and d’Avella.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 18 July 2017
Accepted: 06 October 2017
Page count
Figures: 9, Tables: 2, Equations: 1, References: 75, Pages: 19
Funding
Funded by: Horizon 2020 Framework Programme (10.13039/100010661)
Award ID: Robotics Program CogIMon (ICT-23-2014, under Grant Agreement 644727)
Funded by: Ministero dell’Istruzione, dell’Università e della Ricerca
Award ID: PRIN grant 2015HFWRYY_002
Funded by: Agenzia Spaziale Italiana (10.13039/501100003981)
Award ID: contract n. I/006/06/0
                Categories
                Neuroscience
                Original Research

Subject: Neurosciences
Keywords: biological motion perception, visual cues, predictions, inter-individual variability, overarm throwing, advanced information, dimensionality reduction, machine learning
