      The Meaningfulness of Effect Sizes in Psychological Research: Differences Between Sub-Disciplines and the Impact of Potential Biases

research-article
Frontiers in Psychology
Frontiers Media S.A.
Keywords: effect size, Cohen, statistical power, sample size, replicability


          Abstract

          Effect sizes are the currency of psychological research. They quantify the results of a study to answer the research question and are used to calculate statistical power. The interpretation of effect sizes—when is an effect small, medium, or large?—has been guided by the recommendations Jacob Cohen gave in his pioneering writings starting in 1962: Either compare an effect with the effects found in past research or use certain conventional benchmarks. The present analysis shows that neither of these recommendations is currently applicable. From past publications without pre-registration, 900 effects were randomly drawn and compared with 93 effects from publications with pre-registration, revealing a large difference: Effects from the former (median r = 0.36) were much larger than effects from the latter (median r = 0.16). That is, certain biases, such as publication bias or questionable research practices, have caused a dramatic inflation in published effects, making it difficult to compare an actual effect with the real population effects (as these are unknown). In addition, there were very large differences in the mean effects between psychological sub-disciplines and between different study designs, making it impossible to apply any global benchmarks. Many more pre-registered studies are needed in the future to derive a reliable picture of real population effects.
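The comparison described in the abstract reduces to a simple statistic: draw effects (as correlations r) from the two groups of publications and compare their medians. A minimal sketch of that computation is below; the effect values are illustrative placeholders, not the study's actual data, though the lists are constructed so their medians match the reported values.

```python
import statistics

# Illustrative (made-up) correlation effect sizes. The study itself drew
# 900 effects from non-pre-registered and 93 from pre-registered publications.
non_prereg = [0.12, 0.25, 0.36, 0.41, 0.55, 0.38, 0.30]
prereg = [0.05, 0.10, 0.16, 0.22, 0.18]

def median_r(effects):
    """Median correlation, the summary statistic reported in the abstract."""
    return statistics.median(effects)

print(f"non-pre-registered median r = {median_r(non_prereg):.2f}")  # 0.36
print(f"pre-registered median r     = {median_r(prereg):.2f}")      # 0.16
```

The gap between the two medians (0.36 vs. 0.16) is the inflation the authors attribute to publication bias and questionable research practices.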


                Author and article information

                Contributors
Journal: Frontiers in Psychology (Front. Psychol.)
Publisher: Frontiers Media S.A.
ISSN: 1664-1078
Published: 11 April 2019
Volume: 10, Article: 813
                Affiliations
                Department of Psychology, Chemnitz University of Technology , Chemnitz, Germany
                Author notes

                Edited by: Fabian Gander, University of Zurich, Switzerland

                Reviewed by: Marcel Van Assen, Tilburg University, Netherlands; Julio Sánchez-Meca, University of Murcia, Spain

*Correspondence: Thomas Schäfer, thomas.schaefer@psychologie.tu-chemnitz.de

                This article was submitted to Quantitative Psychology and Measurement, a section of the journal Frontiers in Psychology

Article
DOI: 10.3389/fpsyg.2019.00813
PMCID: PMC6470248
PMID: 31031679
                Copyright © 2019 Schäfer and Schwarz.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 13 December 2018
Accepted: 26 March 2019
                Page count
Figures: 6, Tables: 3, Equations: 0, References: 46, Pages: 13
                Funding
                Funded by: Technische Universität Chemnitz 10.13039/100009117
                Funded by: Deutsche Forschungsgemeinschaft 10.13039/501100001659
                Categories
                Psychology
                Original Research

Clinical Psychology & Psychiatry
