      Third-party punishment by preverbal infants

          Abstract

          Third-party punishment of antisocial others is unique to humans and seems to be universal across cultures. However, its emergence in ontogeny remains unknown. We developed a participatory cognitive paradigm using gaze-contingency techniques, in which infants can use their gaze to affect agents displayed on a monitor. In this paradigm, fixation on an agent triggers the event of a stone crushing the agent. Throughout five experiments (total N = 120), we show that eight-month-old infants punished antisocial others. Specifically, infants increased their selective looks at the aggressor after watching aggressive interactions. Additionally, three control experiments excluded alternative interpretations of their selective gaze, suggesting that punishment-related decision-making influenced looking behaviour. These findings indicate that a disposition for third-party punishment of antisocial others emerges in early infancy and emphasize the importance of third-party punishment for human cooperation. This behavioural tendency may be a human trait acquired over the course of evolution.
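
The gaze-contingent paradigm above amounts to a dwell-time rule: when the infant's gaze stays inside an agent's area of interest for long enough, the crushing event plays. A minimal sketch in R of such a trigger, assuming hypothetical sample timing, AOI coordinates and dwell threshold; the authors' actual stimulus software and parameters are not described here:

```r
# Illustrative gaze-contingent trigger; AOI size, sampling interval and
# dwell threshold are assumptions, not values from the paper.
aoi_contains <- function(aoi, x, y) {
  x >= aoi$x_min && x <= aoi$x_max && y >= aoi$y_min && y <= aoi$y_max
}

# Walk through gaze samples in order; consecutive samples inside the AOI
# accumulate dwell time, and leaving the AOI resets it. Returns TRUE once
# the dwell threshold is reached, i.e. the crushing animation should play.
fixation_triggers_event <- function(samples, aoi,
                                    threshold_ms = 300, sample_ms = 16.7) {
  dwell <- 0
  for (i in seq_len(nrow(samples))) {
    if (aoi_contains(aoi, samples$x[i], samples$y[i])) {
      dwell <- dwell + sample_ms
      if (dwell >= threshold_ms) return(TRUE)
    } else {
      dwell <- 0
    }
  }
  FALSE
}
```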

Most cited references (46)

          brms: An R Package for Bayesian Multilevel Models Using Stan
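
brms fits Bayesian multilevel models in Stan through an lme4-style formula interface, which is the kind of analysis the article above reports. A minimal sketch with hypothetical data and variable names (`d`, `chose_aggressor`, `condition`, `subject`); it is not the model from the paper:

```r
library(brms)

# Bayesian multilevel logistic regression: probability of a binary choice
# by condition, with by-subject random intercepts and condition slopes.
fit <- brm(
  chose_aggressor ~ condition + (1 + condition | subject),
  data = d,
  family = bernoulli(),
  chains = 4, iter = 2000
)
summary(fit)  # posterior summaries for fixed and random effects
```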

            Random effects structure for confirmatory hypothesis testing: Keep it maximal.

Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades. Through theoretical arguments and Monte Carlo simulation, we show that LMEMs generalize best when they include the maximal random effects structure justified by the design. The generalization performance of LMEMs including data-driven random effects structures strongly depends upon modeling criteria and sample size, yielding reasonable results on moderately-sized samples when conservative criteria are used, but with little or no power advantage over maximal models. Finally, random-intercepts-only LMEMs used on within-subjects and/or within-items data from populations where subjects and/or items vary in their sensitivity to experimental manipulations always generalize worse than separate F1 and F2 tests, and in many cases, even worse than F1 alone. Maximal LMEMs should be the 'gold standard' for confirmatory hypothesis testing in psycholinguistics and beyond.
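
In lme4-style notation, the contrast drawn in this abstract is between a random-intercepts-only model and the maximal model justified by a within-subject, within-item design. A minimal sketch with hypothetical variable names (`rt`, `condition`, `subject`, `item`, data frame `d`):

```r
library(lme4)

# Random-intercepts-only: assumes all subjects and items respond to the
# manipulation equally strongly, which the abstract argues generalizes poorly.
m_intercepts <- lmer(rt ~ condition + (1 | subject) + (1 | item), data = d)

# Maximal random effects structure justified by the design: random
# intercepts plus random condition slopes for both subjects and items.
m_maximal <- lmer(rt ~ condition +
                    (1 + condition | subject) +
                    (1 + condition | item), data = d)
```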

              Stan: A Probabilistic Programming Language

              Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
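
As a concrete illustration of this description, here is a toy Stan program called from R via the rstan package mentioned in the abstract. The model, a Bernoulli rate with a uniform prior, is a standard introductory example rather than anything from this article, and it uses the array syntax of Stan versions contemporary with the abstract:

```r
library(rstan)

# A Stan program defines a log probability over parameters conditioned on
# data: here a single rate theta for N binary observations y.
model_code <- "
data {
  int<lower=0> N;
  int<lower=0, upper=1> y[N];
}
parameters {
  real<lower=0, upper=1> theta;
}
model {
  theta ~ beta(1, 1);      // uniform prior on the rate
  y ~ bernoulli(theta);    // likelihood
}
"

# Full Bayesian inference via Stan's adaptive Hamiltonian Monte Carlo (NUTS).
fit <- stan(model_code = model_code,
            data = list(N = 10, y = c(1, 0, 1, 1, 0, 1, 1, 1, 0, 1)))
print(fit)  # posterior summary for theta
```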

                Author and article information

Journal: Nature Human Behaviour (Nat Hum Behav)
Publisher: Springer Science and Business Media LLC
ISSN: 2397-3374
Published: 9 June 2022
DOI: 10.1038/s41562-022-01354-2
© 2022
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0)
