
      Frequency of multiple changes to prespecified primary outcomes of clinical trials completed between 2009 and 2017 in German university medical centers: A meta-research study

      research-article


          Abstract

          Background

          Clinical trial registries allow assessment of deviations of published trials from their protocol, which may indicate a considerable risk of bias. However, since entries in many registries can be updated at any time, deviations may go unnoticed. We aimed to assess the frequency of changes to primary outcomes in different historical versions of registry entries, and how often they would go unnoticed if only deviations between published trial reports and the most recent registry entry are assessed.

          Methods and findings

          We analyzed the complete history of changes of registry entries in all 1746 randomized controlled trials completed at German university medical centers between 2009 and 2017, with published results up to 2022, that were registered in ClinicalTrials.gov or the German WHO primary registry (German Clinical Trials Register; DRKS). Data were retrieved on 24 January 2022. We assessed deviations between registry entries and publications in a random subsample of 292 trials. We determined changes of primary outcomes (1) between different versions of registry entries at key trial milestones, (2) between the latest registry entry version and the results publication, and (3) changes that occurred after trial start with no change between latest registry entry version and publication (so that assessing the full history of changes is required for detection of changes). We categorized changes as major if primary outcomes were added, dropped, changed to secondary outcomes, or secondary outcomes were turned into primary outcomes. We also assessed (4) the proportion of publications transparently reporting changes and (5) characteristics associated with changes. Of all 1746 trials, 23% (n = 393) had a primary outcome change between trial start and latest registry entry version, with 8% (n = 142) being major changes, that is, primary outcomes were added, dropped, changed to secondary outcomes, or secondary outcomes were turned into primary outcomes. Primary outcomes in publications were different from the latest registry entry version in 41% of trials (120 of the 292 sampled trials; 95% confidence interval (CI) [35%, 47%]), with major changes in 18% (54 of 292; 95% CI [14%, 23%]). Overall, 55% of trials (161 of 292; 95% CI [49%, 61%]) had primary outcome changes at any timepoint over the course of a trial, with 23% of trials (67 of 292; 95% CI [18%, 28%]) having major changes. Changes only within registry records, with no apparent discrepancy between latest registry entry version and publication, were observed in 14% of trials (41 of 292; 95% CI [10%, 19%]), with 4% (13 of 292; 95% CI [2%, 7%]) being major changes. One percent of trials with a change reported this in their publication (2 of 161 trials; 95% CI [0%, 4%]). An exploratory logistic regression analysis indicated that trials were less likely to have a discrepant registry entry if they were registered more recently (odds ratio (OR) 0.74; 95% CI [0.69, 0.80]; p<0.001), were not registered on ClinicalTrials.gov (OR 0.41; 95% CI [0.23, 0.70]; p = 0.002), or were not industry-sponsored (OR 0.29; 95% CI [0.21, 0.41]; p<0.001). Key limitations include some degree of subjectivity in the categorization of outcome changes and inclusion of a single geographic region.
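The categorization of "major" changes described above amounts to a set comparison between versions of a registry entry. The sketch below is a minimal illustration of that idea, not the authors' published analysis code; the RegistryVersion structure and classify_change function are hypothetical names introduced here, and outcome descriptions are assumed to be normalized beforehand so that mere rewordings are not mistaken for additions or deletions.

```python
# Hypothetical sketch (not the authors' code): classify the change in primary
# outcomes between two versions of a registry entry, following the paper's
# definition of a "major" change (primary outcomes added, dropped, demoted to
# secondary, or secondary outcomes promoted to primary).
from dataclasses import dataclass


@dataclass(frozen=True)
class RegistryVersion:
    primary: frozenset[str]    # normalized primary outcome descriptions
    secondary: frozenset[str]  # normalized secondary outcome descriptions


def classify_change(earlier: RegistryVersion, later: RegistryVersion) -> str:
    """Return 'none', 'minor', or 'major' for the primary outcome change."""
    if earlier.primary == later.primary:
        return "none"
    added = later.primary - earlier.primary
    dropped = earlier.primary - later.primary
    demoted = earlier.primary & later.secondary    # primary -> secondary
    promoted = earlier.secondary & later.primary   # secondary -> primary
    if added or dropped or demoted or promoted:
        return "major"
    # Any remaining difference (e.g., in how the outcomes are grouped) is
    # treated here as a non-major change; in the actual study this
    # distinction involved manual assessment.
    return "minor"


# Toy example: "pain at 6 months" is dropped from the primary outcomes and
# turned into a secondary outcome -> classified as a major change.
v1 = RegistryVersion(primary=frozenset({"pain at 6 months", "function at 6 months"}),
                     secondary=frozenset({"quality of life"}))
v2 = RegistryVersion(primary=frozenset({"function at 6 months"}),
                     secondary=frozenset({"quality of life", "pain at 6 months"}))
print(classify_change(v1, v2))  # -> "major"
```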

          Conclusions

In this study, we observed that changes to primary outcomes occur in 55% of trials, with 23% of trials having major changes. These changes are rarely reported transparently in the results publication and are often not visible in the latest registry entry version. More transparency is needed, supported by deeper analysis of registry entries to make these changes more easily recognizable.

          Protocol registration: Open Science Framework ( https://osf.io/t3qva; amendment in https://osf.io/qtd2b).


          In this meta-research study, Martin Holst and colleagues investigate changes to prespecified primary outcomes in the historic registries of clinical trials.

          Author summary

          Why was this study done?
          • Clinical trial registries are a key tool to increase the trustworthiness of clinical trials. They allow assessment of how closely a published trial follows its original plan.

• However, registry entries can be updated at any time, which creates a trail of historical versions. If the latest registry entry version matches the published trial report, important earlier changes may go unnoticed by assessors at first glance.

          • Our objective was to investigate how often primary outcomes are changed in the trial registry over the course of a trial, and how often outcome changes are unapparent if one compares only the latest registry entry version to the publication.

          What did the researchers do and find?
• We assessed all 1746 randomized controlled trials completed at German university medical centers between 2009 and 2017 that have a results publication and were registered in either an international or a German clinical trial registry. We determined the frequency of outcome changes between different versions of a registry entry, as well as between the latest registry entry and the results publication.

          • We defined adding or dropping primary outcomes, changing them to secondary outcomes, or turning secondary outcomes into primary outcomes, as major changes.

• We found that approximately 55% of trials had primary outcome changes at some point over the course of the trial; 23% of trials had major changes. In 41% of trials, the changes can be identified simply by comparing the published results to the latest registry entry. In 14% of trials, however, detecting the changes requires an in-depth look at the historical versions of that trial’s registry entry (the reported proportions and their confidence intervals can be recomputed as in the sketch after this list).

          • Only 1% of trials with changes (2 trials) reported this in the corresponding publications.
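The confidence intervals quoted in the abstract can be recomputed from the reported counts (161 of 292 trials with any primary outcome change, 67 of 292 with major changes). The abstract does not state which interval method the authors used; the minimal sketch below assumes a Wilson score interval, which reproduces the reported bounds.

```python
# Minimal sketch: recompute 95% confidence intervals for the reported
# proportions from the counts given in the abstract, assuming a Wilson
# score interval (the authors' exact method is not stated in the abstract).
from math import sqrt


def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half


for label, k in [("any primary outcome change", 161), ("major change", 67)]:
    lo, hi = wilson_ci(k, 292)
    print(f"{label}: {k}/292 = {k/292:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")
# Prints approximately 55% [49%, 61%] and 23% [18%, 28%],
# matching the values reported in the abstract.
```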

          What do these findings mean?
          • Our analysis suggests that changes to primary outcomes of a clinical trial are common, are often major, and have a potential to go unnoticed.

          • More transparency is needed, supported by deeper analysis of registry entries to reveal these outcome changes.


                Author and article information

                Contributors
Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Visualization, Writing – original draft
Roles: Formal analysis, Investigation, Methodology, Visualization, Writing – original draft
Roles: Investigation, Writing – review & editing
Roles: Conceptualization, Funding acquisition, Writing – review & editing
Roles: Conceptualization, Methodology, Supervision, Writing – review & editing
Roles: Conceptualization, Methodology, Software, Supervision, Validation, Writing – review & editing
Journal
PLOS Medicine (PLoS Med), Public Library of Science (San Francisco, CA, USA)
ISSN: 1549-1277 (print); 1549-1676 (electronic)
Published: 31 October 2023 (October 2023)
Volume 20, Issue 10: e1004306
                Affiliations
[1] QUEST Center for Responsible Research, Berlin Institute of Health at Charité–Universitätsmedizin Berlin, Berlin, Germany
[2] Institute for Ethics, History and Philosophy of Medicine, Medizinische Hochschule Hannover, Hannover, Germany
[3] Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland
[4] Meta-Research Innovation Center Berlin, QUEST Center for Responsible Research, Berlin Institute of Health at Charité–Universitätsmedizin Berlin, Berlin, Germany
[5] Meta-Research Innovation Center at Stanford, Stanford University, Stanford, California, United States of America
[6] Pragmatic Evidence Lab, Research Center for Clinical Neuroimmunology and Neuroscience (RC2NB), University Hospital Basel and University of Basel, Basel, Switzerland
                Author notes

                The authors have declared that no competing interests exist.

                ‡ MH and MH share first authorship on this work. LGH and BGC are joint senior authors on this work.

                Author information
                https://orcid.org/0000-0002-8135-6265
                https://orcid.org/0000-0002-8067-2856
                https://orcid.org/0000-0002-0748-383X
                https://orcid.org/0000-0002-9153-079X
                https://orcid.org/0000-0002-3444-1432
                https://orcid.org/0000-0001-8975-0649
Article
PMEDICINE-D-23-00549
DOI: 10.1371/journal.pmed.1004306
PMCID: 10645365
PMID: 37906614
                © 2023 Holst et al

                This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

                History
Received: 1 March 2023
Accepted: 3 October 2023
                Page count
                Figures: 3, Tables: 3, Pages: 18
                Funding
Funded by: Bundesministerium für Bildung und Forschung (funder ID: http://dx.doi.org/10.13039/501100002347)
Award ID: 01PW18012
Award Recipient: DS
                This work was funded under a grant from the Federal Ministry of Education and Research of Germany (Bundesministerium fuer Bildung und Forschung – BMBF) [01PW18012], awarded to DS. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
                Categories
                Research Article
Subjects: Medicine and Health Sciences; Clinical Medicine; Clinical Trials; Randomized Controlled Trials; Clinical Trial Reporting; Pharmacology; Drug Research and Development; Research and Analysis Methods; Medical Humanities; Medical Journals; Engineering and Technology; Measurement; Time Measurement; Biology and Life Sciences; Immunology
Data availability
                The final dataset has been posted to our Open Science Framework page ( https://osf.io/e2uct/). Analysis code and original datasets are available on Github ( https://github.com/Martin-R-H/InvisibleOutcomeChanges).
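The statement above points to the authors' public GitHub repository. The following is a minimal sketch for fetching it and listing its contents; only the repository URL is taken from the statement, the repository layout is not described here, so no specific file names are assumed (requires git to be installed).

```python
# Minimal sketch: clone the authors' analysis repository and list its files.
import subprocess
from pathlib import Path

repo_url = "https://github.com/Martin-R-H/InvisibleOutcomeChanges"
target = Path("InvisibleOutcomeChanges")

if not target.exists():
    subprocess.run(["git", "clone", repo_url, str(target)], check=True)

# Print every tracked file, skipping git's internal metadata.
for path in sorted(target.rglob("*")):
    if path.is_file() and ".git" not in path.parts:
        print(path.relative_to(target))
```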

