
      An evaluation of DistillerSR’s machine learning-based prioritization tool for title/abstract screening – impact on reviewer-relevant outcomes


          Abstract

          Background

          Systematic reviews often require substantial resources, partly due to the large number of records identified during searching. Although artificial intelligence may not be ready to fully replace human reviewers, it may accelerate screening and reduce its burden. Using DistillerSR (May 2020 release), we evaluated the performance of the prioritization simulation tool to determine the reduction in screening burden and the time savings.

          Methods

          Using a true recall @ 95%, response sets from 10 completed systematic reviews were used to evaluate: (i) the reduction of screening burden; (ii) the accuracy of the prioritization algorithm; and (iii) the hours saved when a modified screening approach was implemented. To account for variation in the simulations, and to introduce randomness (through shuffling the references), 10 simulations were run for each review. Means, standard deviations, medians and interquartile ranges (IQR) are presented.
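The core calculation behind this design can be sketched in a few lines. The following is a minimal illustration, not DistillerSR's actual algorithm: the relevance scores are simulated with a hypothetical noisy model, and `burden_at_recall` simply walks the ranked list until 95% of the true includes have been found, returning the fraction of records screened.

```python
import math
import random

def burden_at_recall(ranked_labels, target_recall=0.95):
    """Fraction of records that must be screened, in ranked order,
    before target_recall of the true includes has been found.
    ranked_labels: 1 for a true include, 0 for an exclude, ordered
    by the prioritization tool's predicted relevance."""
    total_includes = sum(ranked_labels)
    needed = math.ceil(target_recall * total_includes)
    found = 0
    for i, label in enumerate(ranked_labels, start=1):
        found += label
        if found >= needed:
            return i / len(ranked_labels)
    return 1.0

# Toy response set (hypothetical numbers): 100 includes among 2000 records.
random.seed(1)
labels = [1] * 100 + [0] * 1900
# Stand-in for a model: includes tend to score higher, with noise.
scores = [random.gauss(1.0 if y else 0.0, 0.5) for y in labels]
ranked = [y for _, y in sorted(zip(scores, labels), key=lambda t: -t[0])]

burden = burden_at_recall(ranked)
print(f"screened {burden:.1%} of records to reach true recall @ 95%")
print(f"screening-burden reduction: {1 - burden:.1%}")
```

Shuffling the references before scoring, as the authors did, and repeating this ten times per review is what produces the means, medians and IQRs reported below.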

          Results

          Among the 10 systematic reviews, using true recall @ 95% there was a median reduction in screening burden of 47.1% (IQR: 37.5 to 58.0%). A median of 41.2% (IQR: 33.4 to 46.9%) of the excluded records needed to be screened to achieve true recall @ 95%. The median title/abstract screening hours saved using a modified screening approach at true recall @ 95% was 29.8 h (IQR: 28.1 to 74.7 h). This increased to a median of 36 h (IQR: 32.2 to 79.7 h) when also counting the time saved by not retrieving and screening the full texts of the remaining 5% of records not yet identified as included at title/abstract. Across the 100 simulations (10 per review), none of these 5% of records was a final included study in the systematic review. Compared with screening to true recall @ 100%, stopping at true recall @ 95% reduced the screening burden by a median of 40.6% (IQR: 38.3 to 54.2%).
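The hours-saved figures reduce to simple arithmetic once a per-record screening rate is assumed. The sketch below uses a hypothetical rate of 60 records per hour (one per minute); it is not the rate used in the study, only an illustration of how a burden reduction translates into time saved.

```python
def hours_saved(n_records, burden_reduction, records_per_hour=60.0):
    """Title/abstract screening hours saved, given the fraction of
    records that no longer need screening and an assumed
    (hypothetical) screening rate."""
    return n_records * burden_reduction / records_per_hour

# e.g. a 4000-record review at the median 47.1% burden reduction:
print(f"{hours_saved(4000, 0.471):.1f} h saved")  # prints "31.4 h saved"
```

Adding the avoided full-text retrieval and screening for the final 5% of records is what pushes the study's median from 29.8 h to 36 h.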

          Conclusions

          The prioritization tool in DistillerSR can reduce screening burden. Stopping or modifying screening once a true recall @ 95% is achieved appears to be a valid method for rapid reviews, and perhaps systematic reviews. This needs further evaluation in prospective reviews using the estimated recall.

          Related collections

          Most cited references (33)

          • A scoping review of rapid review methods
          • Evidence summaries: the evolution of a rapid review approach
          • Living systematic review: 1. Introduction—the why, what, when, and how

                Author and article information

                Contributors
                cahamel@ohri.ca

                Journal
                BMC Medical Research Methodology (BMC Med Res Methodol), BioMed Central (London), ISSN 1471-2288
                Published: 15 October 2020
                Volume 20, Article 256

                Affiliations
                [1] Clinical Epidemiology Program, Ottawa Hospital Research Institute, 501 Smyth Road, Box 201b, Ottawa, Ontario K1H 8L6, Canada
                [2] Department of Medicine, University of Split, Split, Croatia
                [3] Cardiovascular Research Methods Centre, University of Ottawa Heart Institute, Ottawa, Ontario, Canada
                [4] School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
                [5] Department of Psychology, McGill University, Montreal, Quebec, Canada

                Author information
                ORCID: http://orcid.org/0000-0002-5871-2137

                Article
                DOI: 10.1186/s12874-020-01129-1
                PMCID: 7559198
                PMID: 33059590
                © The Author(s) 2020

                Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

                History
                Received: 11 June 2020
                Accepted: 22 September 2020

                Funding
                Funded by: CIHR

                Categories
                Research Article

                Keywords
                Medicine; artificial intelligence; systematic reviews; rapid reviews; prioritization; automation; natural language processing; machine learning; time savings; efficiency; true recall
