Rigorous evidence identification is essential for systematic reviews and meta‐analyses (evidence syntheses) because the selection of relevant studies determines a review's outcome, validity, and explanatory power. Yet the search systems that provide access to this evidence offer varying levels of precision, recall, and reproducibility, and also demand different levels of effort. To date, it remains unclear which search systems are most appropriate for evidence synthesis and why. Advice on which search engines and bibliographic databases to choose for systematic searches is limited and lacks systematic, empirical performance assessments. This study investigates and compares the suitability for systematic searching of 28 widely used academic search systems, including Google Scholar, PubMed, and Web of Science. A novel, query‐based method tests how well users can interact with each system and retrieve records from it. The study is the first to show the extent to which search systems can effectively and efficiently perform (Boolean) searches with regard to precision, recall, and reproducibility. We found substantial differences in the performance of search systems, meaning that their usability in systematic searches varies. Indeed, only half of the search systems analyzed, and only a few Open Access databases, can be recommended for evidence syntheses without substantial caveats. In particular, our findings demonstrate why Google Scholar is inappropriate as a principal search system. We call on database owners to recognize the requirements of evidence synthesis and on academic journals to reassess their quality requirements for systematic reviews. Our findings aim to support researchers in conducting better searches for better evidence synthesis.
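The precision and recall mentioned above can be stated concretely. A minimal sketch, assuming a gold-standard set of relevant records is available (the function name and the record IDs below are hypothetical, for illustration only):

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for one search query.

    retrieved: set of record IDs returned by a search system
    relevant:  set of record IDs known to be relevant (gold standard)
    """
    hits = retrieved & relevant  # true positives: relevant records actually found
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: a system returns 8 records (IDs 0-7),
# 6 of which fall within 10 known relevant records (IDs 2-11).
p, r = precision_recall(set(range(8)), set(range(2, 12)))
# p = 6/8 = 0.75, r = 6/10 = 0.6
```

A system tuned for high recall (casting a wide net, as evidence synthesis requires) typically sacrifices precision, which is why the trade-off between the two is central to comparing search systems.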
Studies with positive results are far more prevalent in the literature than studies with negative results, producing so-called publication bias. This review discusses the problems surrounding negative results and emphasizes the importance of reporting them. Underreporting of negative results introduces bias into meta-analyses, which consequently misinforms researchers, doctors, and policymakers. Resources are potentially wasted on research that has already been disputed but remains unpublished and therefore unavailable to the scientific community. Ethical obligations must also be considered when reporting the results of studies on human subjects, as participants have exposed themselves to risk on the assurance that the study would benefit others. Some studies disprove the common perception that journal editors preferentially publish positive findings because they are considered more citable. Therefore, all stakeholders, but especially researchers, need to be conscientious about disseminating negative and positive findings alike.