
      What Are the MCIDs for PROMIS, NDI, and ODI Instruments Among Patients With Spinal Conditions?



Most cited references (50)


          The Patient-Reported Outcomes Measurement Information System (PROMIS) developed and tested its first wave of adult self-reported health outcome item banks: 2005-2008.

          Patient-reported outcomes (PROs) are essential when evaluating many new treatments in health care; yet, current measures have been limited by a lack of precision, standardization, and comparability of scores across studies and diseases. The Patient-Reported Outcomes Measurement Information System (PROMIS) provides item banks that offer the potential for efficient (minimizes item number without compromising reliability), flexible (enables optional use of interchangeable items), and precise (has minimal error in estimate) measurement of commonly studied PROs. We report results from the first large-scale testing of PROMIS items. Fourteen item pools were tested in the U.S. general population and clinical groups using an online panel and clinic recruitment. A scale-setting subsample was created reflecting demographics proportional to the 2000 U.S. census. Using item-response theory (graded response model), 11 item banks were calibrated on a sample of 21,133, measuring components of self-reported physical, mental, and social health, along with a 10-item Global Health Scale. Short forms from each bank were developed and compared with the overall bank and with other well-validated and widely accepted ("legacy") measures. All item banks demonstrated good reliability across most of the score distributions. Construct validity was supported by moderate to strong correlations with legacy measures. PROMIS item banks and their short forms provide evidence that they are reliable and precise measures of generic symptoms and functional reports comparable to legacy instruments. Further testing will continue to validate and test PROMIS items and banks in diverse clinical populations. Copyright © 2010 Elsevier Inc. All rights reserved.
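The graded response model used to calibrate the PROMIS item banks maps a respondent's latent trait level to probabilities over ordered response categories. A minimal sketch follows; the parameter values are illustrative only, not actual PROMIS calibrations:

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Graded response model: probability of each response category for a
    respondent with latent trait theta, given item discrimination `a` and
    ordered category thresholds b_1 < ... < b_{K-1} (illustrative values)."""
    # Cumulative probability of responding in category k or higher
    cum = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds] + [0.0]
    # Category probabilities are differences of adjacent cumulative curves
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

# Hypothetical item: discrimination 1.8, three thresholds -> four categories
probs = grm_category_probs(theta=0.5, a=1.8, thresholds=[-1.0, 0.0, 1.2])
```

Higher discrimination `a` sharpens the category boundaries; the thresholds locate them along the trait continuum.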

            Recommended methods for determining responsiveness and minimally important differences for patient-reported outcomes.

            The objective of this review is to summarize recommendations on methods for evaluating responsiveness and minimal important difference (MID) for patient-reported outcome (PRO) measures. We review, summarize, and integrate information on issues and methods for evaluating responsiveness and determining MID estimates for PRO measures. Recommendations are made on best-practice methods for evaluating responsiveness and MID. The MID for a PRO instrument is not an immutable characteristic, but may vary by population and context, and no one MID may be valid for all study applications. MID estimates should be based on multiple approaches and triangulation of methods. Anchor-based methods applying various relevant patient-rated, clinician-rated, and disease-specific variables provide primary and meaningful estimates of an instrument's MID. Results for the PRO measures from clinical trials can also provide insight into observed effects based on treatment comparisons and should be used to help determine MID. Distribution-based methods can support estimates from anchor-based approaches and can be used in situations where anchor-based estimates are unavailable. We recommend that the MID is based primarily on relevant patient-based and clinical anchors, with clinical trial experience used to further inform understanding of MID.
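Two distribution-based estimates mentioned in this literature, half the baseline standard deviation and one standard error of measurement, can be sketched as follows; the scores and reliability value are fabricated for illustration:

```python
import math
import statistics

def distribution_based_mid(baseline_scores, reliability):
    """Two common distribution-based MID estimates for a PRO score:
    half the baseline SD, and one standard error of measurement,
    SEM = SD * sqrt(1 - reliability)."""
    sd = statistics.stdev(baseline_scores)
    return {"half_sd": 0.5 * sd, "sem": sd * math.sqrt(1.0 - reliability)}

# Fabricated baseline scores and an assumed reliability of 0.90
scores = [42.0, 55.0, 48.0, 61.0, 50.0, 45.0, 58.0, 52.0]
est = distribution_based_mid(scores, reliability=0.90)
```

As the review recommends, such distribution-based values should only support, not replace, anchor-based estimates triangulated from patient- and clinician-rated anchors.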

              Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed.

              Results of reliability and agreement studies are intended to provide information about the amount of error inherent in any diagnosis, score, or measurement. The level of reliability and agreement among users of scales, instruments, or classifications is widely unknown. Therefore, there is a need for rigorously conducted interrater and intrarater reliability and agreement studies. Information about sample selection, study design, and statistical analysis is often incomplete. Because of inadequate reporting, interpretation and synthesis of study results are often difficult. Widely accepted criteria, standards, or guidelines for reporting reliability and agreement in the health care and medical field are lacking. The objective was to develop guidelines for reporting reliability and agreement studies. Eight experts in reliability and agreement investigation developed guidelines for reporting. Fifteen issues that should be addressed when reliability and agreement are reported are proposed. The issues correspond to the headings usually used in publications. The proposed guidelines intend to improve the quality of reporting. Copyright © 2011 Elsevier Inc. All rights reserved.
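For the interrater agreement studies GRRAS addresses, a standard chance-corrected statistic for two raters and categorical ratings is Cohen's kappa; a minimal sketch (GRRAS itself is a reporting guideline and does not prescribe this statistic):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters
    assigning categorical ratings to the same subjects."""
    n = len(rater1)
    # Observed proportion of exact agreement
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement by chance, from each rater's marginal frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Perfect agreement yields kappa = 1
assert cohens_kappa(["a", "b", "a"], ["a", "b", "a"]) == 1.0
```

Reporting the marginal frequencies alongside kappa, as GRRAS suggests for study design details, helps readers judge whether a low kappa reflects poor agreement or skewed category prevalence.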

Author and article information

Journal: Clinical Orthopaedics & Related Research (Clin Orthop Relat Res)
Publisher: Ovid Technologies (Wolters Kluwer Health)
ISSN: 0009-921X
Published: September 4, 2018 (October 2018 issue)
Volume 476, Issue 10: 2027-2036
DOI: 10.1097/CORR.0000000000000419
PMID: 30179950
© 2018
