
      Designing and undertaking randomised implementation trials: guide for researchers


          Abstract

          Implementation science is the study of methods to promote the systematic uptake of evidence based interventions into practice and policy to improve health. Despite the need for high quality evidence from implementation research, randomised trials of implementation strategies often have serious limitations. These limitations include high risks of bias, limited use of theory, a lack of standard terminology to describe implementation strategies, narrowly focused implementation outcomes, and poor reporting. This paper aims to improve the evidence base in implementation science by providing guidance on the development, conduct, and reporting of randomised trials of implementation strategies. Established randomised trial methods from seminal texts and recent developments in implementation science were consolidated by an international group of researchers, health policy makers, and practitioners. This article provides guidance on the key components of randomised trials of implementation strategies, including articulation of trial aims, trial recruitment and retention strategies, randomised design selection, use of implementation science theory and frameworks, measures, sample size calculations, ethical review, and trial reporting. It also focuses on topics requiring special consideration or adaptation for implementation trials. We propose this guide as a resource for researchers, healthcare and public health policy makers or practitioners, research funders, and journal editors with the goal of advancing rigorous conduct and reporting of randomised trials of implementation strategies.

          Related collections

          Most cited references (150)

          RoB 2: a revised tool for assessing risk of bias in randomised trials


            Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science

            Background: Many interventions found to be effective in health services research studies fail to translate into meaningful patient care outcomes across multiple contexts. Health services researchers recognize the need to evaluate not only summative outcomes but also formative outcomes to assess the extent to which implementation is effective in a specific setting, prolongs sustainability, and promotes dissemination into other settings. Many implementation theories have been published to help promote effective implementation. However, they overlap considerably in the constructs included in individual theories, and a comparison of theories reveals that each is missing important constructs included in other theories. In addition, terminology and definitions are not consistent across theories. We describe the Consolidated Framework for Implementation Research (CFIR) that offers an overarching typology to promote implementation theory development and verification about what works where and why across multiple contexts.

            Methods: We used a snowball sampling approach to identify published theories that were evaluated to identify constructs based on strength of conceptual or empirical support for influence on implementation, consistency in definitions, alignment with our own findings, and potential for measurement. We combined constructs across published theories that had different labels but were redundant or overlapping in definition, and we parsed apart constructs that conflated underlying concepts.

            Results: The CFIR is composed of five major domains: intervention characteristics, outer setting, inner setting, characteristics of the individuals involved, and the process of implementation. Eight constructs were identified related to the intervention (e.g., evidence strength and quality), four constructs were identified related to outer setting (e.g., patient needs and resources), 12 constructs were identified related to inner setting (e.g., culture, leadership engagement), five constructs were identified related to individual characteristics, and eight constructs were identified related to process (e.g., plan, evaluate, and reflect). We present explicit definitions for each construct.

            Conclusion: The CFIR provides a pragmatic structure for approaching complex, interacting, multi-level, and transient states of constructs in the real world by embracing, consolidating, and unifying key constructs from published implementation theories. It can be used to guide formative evaluations and build the implementation knowledge base across multiple studies and settings.

              Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide

              Without a complete published description of interventions, clinicians and patients cannot reliably implement interventions that are shown to be useful, and other researchers cannot replicate or build on research findings. The quality of description of interventions in publications, however, is remarkably poor. To improve the completeness of reporting, and ultimately the replicability, of interventions, an international group of experts and stakeholders developed the Template for Intervention Description and Replication (TIDieR) checklist and guide. The process involved a literature review for relevant checklists and research, a Delphi survey of an international panel of experts to guide item selection, and a face to face panel meeting. The resultant 12 item TIDieR checklist (brief name, why, what (materials), what (procedure), who provided, how, where, when and how much, tailoring, modifications, how well (planned), how well (actual)) is an extension of the CONSORT 2010 statement (item 5) and the SPIRIT 2013 statement (item 11). While the emphasis of the checklist is on trials, the guidance is intended to apply across all evaluative study designs. This paper presents the TIDieR checklist and guide, with an explanation and elaboration for each item, and examples of good reporting. The TIDieR checklist and guide should improve the reporting of interventions and make it easier for authors to structure accounts of their interventions, reviewers and editors to assess the descriptions, and readers to use the information.

                Author and article information

                Journal: BMJ
                ISSN: 1756-1833
                Published: January 18 2021
                Article: m3721
                DOI: 10.1136/bmj.m3721
                © 2021

                Free to read

                http://creativecommons.org/licenses/by-nc/4.0/
