      Hypothesis testing in Bayesian network meta-analysis

      research-article


          Abstract

          Background

Network meta-analysis is an extension of classical pairwise meta-analysis and allows multiple interventions to be compared based on both head-to-head comparisons within trials and indirect comparisons across trials. Bayesian or frequentist models are applied to obtain effect estimates with credible or confidence intervals. Furthermore, p-values or similar measures may be helpful for comparing the included arms, but related methods have not yet been addressed in the literature. In this article, we discuss how hypothesis testing can be done in a Bayesian network meta-analysis.

          Methods

          An index is presented and discussed in a Bayesian modeling framework. Simulation studies were performed to evaluate the characteristics of this index. The approach is illustrated by a real data example.

          Results

          The simulation studies revealed that the type I error rate is controlled. The approach can be applied in a superiority as well as in a non-inferiority setting.

          Conclusions

Test decisions can be based on the proposed index. The index may be a valuable complement to the commonly reported results of network meta-analyses. The method is easy to apply and incurs no noticeable additional computational cost.
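The kind of test decision the abstract describes can be illustrated with a minimal sketch: estimate the posterior probability that a relative effect exceeds a margin, then declare superiority (or non-inferiority) when that probability reaches 1 − α. This is not the paper's exact index — the effect name `d_AB`, the margins, and the simulated draws below are invented for illustration; in practice the draws would come from the MCMC output of a Bayesian NMA model.

```python
import random

random.seed(1)

def posterior_prob(draws, margin=0.0):
    """Posterior probability that the effect exceeds `margin`
    (margin = 0: superiority; margin < 0: non-inferiority margin)."""
    return sum(d > margin for d in draws) / len(draws)

# Hypothetical posterior draws for a relative effect d_AB, standing in
# for MCMC output of a Bayesian network meta-analysis model.
draws = [random.gauss(0.4, 0.2) for _ in range(10_000)]

alpha = 0.025
p_sup = posterior_prob(draws)              # P(d_AB > 0 | data)
p_ni = posterior_prob(draws, margin=-0.1)  # P(d_AB > -0.1 | data)
decide_superior = p_sup >= 1 - alpha       # one-sided test decision
```

Because the non-inferiority margin is less strict than the superiority threshold, `p_ni` is always at least `p_sup`; the same posterior sample serves both settings at no extra computational cost, matching the abstract's point.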

          Related collections

Most cited references

          Checking consistency in mixed treatment comparison meta-analysis.

Pooling of direct and indirect evidence from randomized trials, known as mixed treatment comparisons (MTC), is becoming increasingly common in the clinical literature. MTC allows coherent judgements on which of the several treatments is the most effective and produces estimates of the relative effects of each treatment compared with every other treatment in a network.

We introduce two methods for checking consistency of direct and indirect evidence. The first method (back-calculation) infers the contribution of indirect evidence from the direct evidence and the output of an MTC analysis and is useful when the only available data consist of pooled summaries of the pairwise contrasts. The second, more general, but computationally intensive, method is based on 'node-splitting', which separates evidence on a particular comparison (node) into 'direct' and 'indirect' and can be applied to networks where trial-level data are available. Methods are illustrated with examples from the literature. We take a hierarchical Bayesian approach to MTC implemented using WinBUGS and R.

We show that both methods are useful in identifying potential inconsistencies in different types of network and that they illustrate how the direct and indirect evidence combine to produce the posterior MTC estimates of relative treatment effects. This allows users to understand how MTC synthesis is pooling the data, and what is 'driving' the final estimates. We end with some considerations on the modelling assumptions being made, the problems with the extension of the back-calculation method to trial-level data, and discuss our methods in the context of the existing literature.
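The back-calculation idea can be sketched in a few lines: treating the network estimate as a precision-weighted average of independent direct and indirect evidence, the indirect estimate is recovered by subtracting precisions, and a z-statistic compares the two. The function names and the numerical values below are invented for illustration, not taken from the cited paper.

```python
import math

def back_calculate_indirect(d_net, se_net, d_dir, se_dir):
    """Infer the indirect estimate implied by the network (pooled) and
    direct estimates, assuming the network estimate is a precision-
    weighted average of independent direct and indirect evidence."""
    w_net, w_dir = 1 / se_net**2, 1 / se_dir**2
    w_ind = w_net - w_dir                       # precision of indirect evidence
    d_ind = (d_net * w_net - d_dir * w_dir) / w_ind
    return d_ind, math.sqrt(1 / w_ind)

def inconsistency_z(d_dir, se_dir, d_ind, se_ind):
    """z-statistic for the conflict between direct and indirect evidence."""
    return (d_dir - d_ind) / math.sqrt(se_dir**2 + se_ind**2)

# Toy numbers: network estimate 0.30 (SE 0.10), direct estimate 0.50 (SE 0.15).
d_ind, se_ind = back_calculate_indirect(0.30, 0.10, 0.50, 0.15)
z = inconsistency_z(0.50, 0.15, d_ind, se_ind)  # d_ind ≈ 0.14
```

A large |z| flags a comparison where direct and indirect evidence disagree; node-splitting generalizes this to trial-level data within the Bayesian model itself.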

            Ranking treatments in frequentist network meta-analysis works without resampling methods

Background: Network meta-analysis is used to compare three or more treatments for the same condition. Within a Bayesian framework, for each treatment the probability of being best, or, more generally, the probability that it has a certain rank can be derived from the posterior distributions of all treatments. The treatments can then be ranked by the surface under the cumulative ranking curve (SUCRA). For comparing treatments in a network meta-analysis, we propose a frequentist analogue to SUCRA, which we call P-score, that works without resampling.

Methods: P-scores are based solely on the point estimates and standard errors of the frequentist network meta-analysis estimates under a normality assumption and can easily be calculated as means of one-sided p-values. They measure the mean extent of certainty that a treatment is better than the competing treatments.

Results: Using case studies of network meta-analysis in diabetes and depression, we demonstrate that the numerical values of SUCRA and P-score are nearly identical.

Conclusions: Ranking treatments in frequentist network meta-analysis works without resampling. Like the SUCRA values, P-scores induce a ranking of all treatments that mostly follows that of the point estimates, but takes precision into account. However, neither SUCRA nor P-score offers a major advantage compared to looking at credible or confidence intervals.

Electronic supplementary material: The online version of this article (doi:10.1186/s12874-015-0060-8) contains supplementary material, which is available to authorized users.
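The P-score computation described above — the mean of one-sided p-values under a normality assumption — can be sketched as follows. This is a simplified illustration: the treatment names, estimates, and standard errors are invented, and a larger estimate is assumed to be better.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def p_scores(estimates, se_diff):
    """P-scores as means of one-sided p-values P(i better than j),
    assuming normally distributed differences and that larger
    estimates are better.  `estimates` maps treatment -> point
    estimate; `se_diff[frozenset((i, j))]` is the standard error of
    the i-vs-j difference from the network meta-analysis."""
    scores = {}
    treatments = list(estimates)
    for i in treatments:
        probs = [normal_cdf((estimates[i] - estimates[j])
                            / se_diff[frozenset((i, j))])
                 for j in treatments if j != i]
        scores[i] = sum(probs) / len(probs)
    return scores

# Toy three-treatment network with a common SE for every contrast.
est = {"A": 0.0, "B": 0.3, "C": 0.5}
se = {frozenset(p): 0.2 for p in [("A", "B"), ("A", "C"), ("B", "C")]}
scores = p_scores(est, se)
```

A convenient check: since P(i better than j) and P(j better than i) sum to one, the P-scores of n treatments always sum to n/2, so their mean is 0.5 regardless of the data.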

              Evidence Synthesis for Decision Making 4

              Inconsistency can be thought of as a conflict between “direct” evidence on a comparison between treatments B and C and “indirect” evidence gained from AC and AB trials. Like heterogeneity, inconsistency is caused by effect modifiers and specifically by an imbalance in the distribution of effect modifiers in the direct and indirect evidence. Defining inconsistency as a property of loops of evidence, the relation between inconsistency and heterogeneity and the difficulties created by multiarm trials are described. We set out an approach to assessing consistency in 3-treatment triangular networks and in larger circuit structures, its extension to certain special structures in which independent tests for inconsistencies can be created, and describe methods suitable for more complex networks. Sample WinBUGS code is given in an appendix. Steps that can be taken to minimize the risk of drawing incorrect conclusions from indirect comparisons and network meta-analysis are the same steps that will minimize heterogeneity in pairwise meta-analysis. Empirical indicators that can provide reassurance and the question of how to respond to inconsistency are also discussed.
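The loop-based view of inconsistency described above can be sketched for a three-treatment ABC triangle: the indirect BC estimate is formed from the AB and AC contrasts, and the inconsistency factor is its difference from the direct BC estimate. This is a simplified Bucher-style illustration with invented numbers, not the cited chapter's WinBUGS code.

```python
import math

def loop_inconsistency(d_ab, se_ab, d_ac, se_ac, d_bc, se_bc):
    """Inconsistency factor of an ABC evidence loop: the conflict
    between the direct BC estimate and the indirect BC estimate
    formed from the AB and AC contrasts (independent evidence
    sources assumed)."""
    d_bc_indirect = d_ac - d_ab                 # transitivity: BC = AC - AB
    w = d_bc - d_bc_indirect                    # inconsistency factor
    se_w = math.sqrt(se_ab**2 + se_ac**2 + se_bc**2)
    return w, se_w, w / se_w                    # factor, its SE, z-statistic

# Toy loop: AB = 0.2, AC = 0.5, direct BC = 0.35, all SEs 0.1.
w, se_w, z = loop_inconsistency(0.2, 0.1, 0.5, 0.1, 0.35, 0.1)
```

A |z| well above 2 would suggest an imbalance of effect modifiers between the direct and indirect evidence in this loop; multi-arm trials complicate this picture because their contrasts are correlated, which the independence assumption here ignores.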

                Author and article information

                Contributors
                uhlmann@imbi.uni-heidelberg.de
                jensen@imbi.uni-heidelberg.de
                meinhard.kieser@imbi.uni-heidelberg.de
                Journal
                BMC Med Res Methodol
                BMC Med Res Methodol
                BMC Medical Research Methodology
                BioMed Central (London )
                1471-2288
                12 November 2018
                2018
                : 18
                : 128
                Affiliations
ISNI 0000 0001 2190 4373, GRID grid.7700.0, Institute of Medical Biometry and Informatics, University of Heidelberg, Im Neuenheimer Feld 130.3, Heidelberg, Germany
                Author information
                http://orcid.org/0000-0001-8668-069X
                Article
                574
                10.1186/s12874-018-0574-y
                6233362
                30419827
                a253fac3-2833-47ab-926e-326ba3cb0fdf
                © The Author(s) 2018

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

                History
                : 13 July 2017
                : 15 October 2018
                Funding
                Funded by: FundRef http://dx.doi.org/10.13039/501100001659, Deutsche Forschungsgemeinschaft;
                Award ID: IN-1150438
                Funded by: FundRef http://dx.doi.org/10.13039/501100001661, Universität Heidelberg;
                Award ID: IN-1150438
                Categories
                Technical Advance
Custom metadata
Medicine
network meta-analysis, hypothesis testing, treatment comparison, superiority, non-inferiority
