      Use of Medications for Treatment of Opioid Use Disorder Among US Medicaid Enrollees in 11 States, 2014-2018

The Medicaid Outcomes Distributed Research Network (MODRN)
      JAMA
      American Medical Association (AMA)


Most cited references (26)


          A basic introduction to fixed-effect and random-effects models for meta-analysis.

There are two popular statistical models for meta-analysis, the fixed-effect model and the random-effects model. The fact that these two models employ similar sets of formulas to compute statistics, and sometimes yield similar estimates for the various parameters, may lead people to believe that the models are interchangeable. In fact, though, the models represent fundamentally different assumptions about the data. The selection of the appropriate model is important to ensure that the various statistics are estimated correctly. Additionally, and more fundamentally, the model serves to place the analysis in context. It provides a framework for the goals of the analysis as well as for the interpretation of the statistics. In this paper we explain the key assumptions of each model, and then outline the differences between the models. We conclude with a discussion of factors to consider when choosing between the two models. Copyright © 2010 John Wiley & Sons, Ltd.
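The contrast the abstract draws can be sketched numerically. In a minimal inverse-variance framework, the fixed-effect model weights each study by 1/v (within-study variance only), while the random-effects model weights by 1/(v + tau^2), adding the between-study variance. The effect sizes, variances, and tau^2 value below are hypothetical illustration values, not data from any of the cited studies:

```python
import numpy as np

# Hypothetical study effect sizes and within-study variances.
y = np.array([0.10, 0.30, 0.35, 0.65])
v = np.array([0.01, 0.04, 0.02, 0.09])

# Fixed-effect model: one common true effect; weights are 1/v.
w_fe = 1.0 / v
mu_fe = np.sum(w_fe * y) / np.sum(w_fe)

# Random-effects model: true effects vary between studies, so each
# weight also absorbs the between-study variance tau^2 (here a
# hypothetical tau^2 = 0.02; in practice it is estimated from the data,
# e.g. by the DerSimonian-Laird method).
tau2 = 0.02
w_re = 1.0 / (v + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)

# The pooled estimates differ, and the random-effects weights are more
# even, so large studies dominate less.
print(mu_fe, mu_re)
print(w_fe / w_fe.sum(), w_re / w_re.sum())
```

The different weighting is the practical consequence of the different assumptions: under the random-effects model no single study can estimate the common effect on its own, so precision differences between studies matter less.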

            Longitudinal data analysis using generalized linear models


              The Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis is straightforward and considerably outperforms the standard DerSimonian-Laird method

Background The DerSimonian and Laird approach (DL) is widely used for random effects meta-analysis, but this often results in inappropriate type I error rates. The method described by Hartung, Knapp, Sidik and Jonkman (HKSJ) is known to perform better when trials of similar size are combined. However, evidence in realistic situations, where one trial might be much larger than the other trials, is lacking. We aimed to evaluate the relative performance of the DL and HKSJ methods when studies of different sizes are combined and to develop a simple method to convert DL results to HKSJ results. Methods We evaluated the performance of the HKSJ versus DL approach in simulated meta-analyses of 2–20 trials with varying sample sizes and between-study heterogeneity, and allowing trials to have various sizes, e.g. 25% of the trials being 10-times larger than the smaller trials. We also compared the number of "positive" (statistically significant at p < 0.05) results for the DL versus the HKSJ method, using empirical data from meta-analyses with ≥3 studies of interventions from the Cochrane Database of Systematic Reviews. Results The simulations showed that the HKSJ method consistently resulted in more adequate error rates than the DL method. When the significance level was 5%, the HKSJ error rates at most doubled, whereas for DL they could be over 30%. DL and, far less so, HKSJ had more inflated error rates when the combined studies had unequal sizes and between-study heterogeneity. The empirical data from 689 meta-analyses showed that 25.1% of the significant findings for the DL method were non-significant with the HKSJ method. DL results can be easily converted into HKSJ results. Conclusions Our simulations showed that the HKSJ method consistently results in more adequate error rates than the DL method, especially when the number of studies is small, and can easily be applied routinely in meta-analyses. Even with the HKSJ method, extra caution is needed when there are ≤5 studies of very unequal sizes.
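The relationship between the two methods can be sketched with the standard formulas: DL estimates the between-study variance tau^2 by the method of moments and uses a normal-theory standard error, while HKSJ rescales that variance by the weighted residual dispersion and refers the test statistic to a t distribution with k−1 degrees of freedom. This is a minimal sketch under those standard formulas, with hypothetical input data (not results from the paper):

```python
import numpy as np

def dl_random_effects(y, v):
    """Pooled random-effects estimate with DL and HKSJ standard errors.

    y : array of study effect estimates
    v : array of within-study variances
    Returns (pooled estimate, tau^2, DL standard error, HKSJ standard error).
    """
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)          # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # DL method-of-moments tau^2
    w_star = 1.0 / (v + tau2)                   # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    se_dl = np.sqrt(1.0 / np.sum(w_star))       # normal-theory SE (DL)
    # HKSJ: scale the variance by the weighted residual dispersion; the
    # resulting statistic is then compared with a t(k-1) distribution
    # rather than the normal.
    q_star = np.sum(w_star * (y - mu) ** 2) / df
    se_hksj = se_dl * np.sqrt(q_star)
    return mu, tau2, se_dl, se_hksj

# Hypothetical effect sizes and variances for illustration only.
y = np.array([0.2, 0.5, 0.8, 1.1])
v = np.array([0.04, 0.05, 0.04, 0.06])
mu, tau2, se_dl, se_hksj = dl_random_effects(y, v)
print(mu, tau2, se_dl, se_hksj)
```

The conversion the abstract mentions follows from the same algebra: the HKSJ standard error is the DL standard error multiplied by the square root of the weighted residual dispersion, so DL output plus Q-type statistics suffices to recover HKSJ intervals (with t rather than normal quantiles).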

                Author and article information

Journal: JAMA
Publisher: American Medical Association (AMA)
ISSN: 0098-7484
Publication date: July 13, 2021
Volume: 326
Issue: 2
First page: 154
Affiliations
[1] The Medicaid Outcomes Distributed Research Network (MODRN)
[2] Department of Health Policy and Management, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, Pennsylvania
[3] Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, Pennsylvania
[4] Public Health Program, Muskie School of Public Service, University of Southern Maine, Portland
[5] Health Policy, Management, and Leadership Department, School of Public Health, West Virginia University, Morgantown
[6] Department of Maternal and Child Health, Gillings School of Global Public Health, University of North Carolina at Chapel Hill
[7] Department of Health Behavior and Policy, School of Medicine, Virginia Commonwealth University, Richmond
[8] Department of Population Health Sciences, School of Medicine and Public Health, University of Wisconsin, Madison
[9] Department of Medicine, School of Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
[10] Department of Pediatrics, University of Michigan Medical School, Ann Arbor
[11] Ohio Colleges of Medicine Government Resource Center, The Ohio State University, Columbus
[12] The Hilltop Institute, University of Maryland Baltimore County, Baltimore
[13] School of Social Work, University of North Carolina at Chapel Hill
[14] Center for Community Research & Service, Biden School of Public Policy and Administration, University of Delaware, Newark
[15] Health Affairs Department, Health Sciences Center, School of Public Health, West Virginia University, Morgantown
[16] AcademyHealth, Washington, DC
[17] Division of Biomedical Informatics, College of Medicine, University of Kentucky, Lexington
[18] Department of Psychiatry, University of Michigan Medical School, Ann Arbor
[19] Department of Medicine and Department of Psychiatry, University of Utah School of Medicine, Salt Lake City
[20] Informatics, Decision-Enhancement, and Analytic Sciences (IDEAS) Center, VA Salt Lake City Health Care System, Salt Lake City
Article
DOI: 10.1001/jama.2021.7374
PMID: 34255008
© 2021