
      Performance of the inFLUenza Patient-Reported Outcome (FLU-PRO) diary in patients with influenza-like illness (ILI)


Abstract

Background: The inFLUenza Patient-Reported Outcome (FLU-PRO) measure is a daily diary assessing signs/symptoms of influenza across six body systems: Nose, Throat, Eyes, Chest/Respiratory, Gastrointestinal, and Body/Systemic. It was developed and tested in adults with influenza.

Objectives: This study tested the reliability, validity, and responsiveness of FLU-PRO scores in adults with influenza-like illness (ILI).

Methods: Data from the prospective, observational study used to develop and test the FLU-PRO in influenza virus-positive patients were analyzed. Adults (≥18 years) presenting with influenza symptoms in outpatient settings in the US, UK, Mexico, and South America were enrolled, tested for influenza virus, and asked to complete the 37-item draft FLU-PRO daily for up to 14 days. Analyses were performed on data from patients testing negative. Reliability of the final, 32-item FLU-PRO was estimated using Cronbach’s alpha (α; Day 1) and intraclass correlation coefficients (ICC; 2-day reproducibility). Convergent and known-groups validity were assessed using patient global assessments of influenza severity (PGA). Patient report of return to usual health was used to assess responsiveness (Days 1–7).

Results: The analytical sample included 220 ILI patients (mean age = 39.3, 64.1% female, 88.6% white). Sixty-one (28%) were hospitalized at some point in their illness. Internal consistency reliability (α) of the FLU-PRO Total score was 0.90 and ranged from 0.72 to 0.86 for the domain scores. Reproducibility (Days 1–2) was 0.64 for the Total score, ranging from 0.46 to 0.78 for the domain scores. Day 1 FLU-PRO scores correlated (≥0.30) with the PGA (except Gastrointestinal) and were significantly different across PGA severity groups (Total: F = 81.7, p<0.001; subscales: F = 6.9–62.2, p<0.01). Mean score improvements from Day 1 to Day 7 were significantly greater in patients reporting return to usual health than in those who did not (p<0.05 for the Total score and all subscales except Gastrointestinal and Eyes).

Conclusions: Results suggest FLU-PRO scores are reliable, valid, and responsive in adults with influenza-like illness.
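To make the analyses described in the Methods concrete, below is a minimal sketch in Python (NumPy/SciPy) of two of the core computations: internal-consistency reliability (Cronbach’s alpha) and known-groups validity (one-way ANOVA across PGA severity groups). The data are simulated placeholders rather than the FLU-PRO study data, and details such as scoring the Total as the mean item response and forming four PGA groups are illustrative assumptions only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated placeholder data: 220 patients x 32 items driven by a single latent
# "symptom severity" factor (NOT the FLU-PRO study data).
severity = rng.normal(size=(220, 1))
items = severity + rng.normal(scale=1.0, size=(220, 32))

def cronbach_alpha(x):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = x.shape[1]
    return (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print(f"alpha = {cronbach_alpha(items):.2f}")

# Known-groups validity: compare a FLU-PRO-style Total score (here, the mean of the
# items) across four hypothetical PGA severity groups with a one-way ANOVA (F, p).
total = items.mean(axis=1)
pga = np.digitize(severity.ravel(), np.quantile(severity, [0.25, 0.5, 0.75]))
f_stat, p_val = stats.f_oneway(*(total[pga == g] for g in np.unique(pga)))
print(f"F = {f_stat:.1f}, p = {p_val:.3g}")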

Most cited references (14)


          Coefficient alpha and the internal structure of tests

          Psychometrika, 16(3), 297-334
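For reference, the coefficient defined in this paper (the statistic reported as internal-consistency reliability in the abstract above) is conventionally written, for k items with item variances \sigma_i^2 and total-score variance \sigma_X^2, as:

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^{2}}{\sigma_X^{2}}\right)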

            Use of existing patient-reported outcome (PRO) instruments and their modification: the ISPOR Good Research Practices for Evaluating and Documenting Content Validity for the Use of Existing Instruments and Their Modification PRO Task Force Report.

Patient-reported outcome (PRO) instruments are used to evaluate the effect of medical products on how patients feel or function. This article presents the results of an ISPOR task force convened to address good clinical research practices for the use of existing or modified PRO instruments to support medical product labeling claims. The focus of the article is on content validity, with specific reference to existing or modified PRO instruments, because of the importance of content validity in selecting or modifying an existing PRO instrument and the lack of consensus in the research community regarding best practices for establishing and documenting this measurement property. Topics addressed in the article include: definition and general description of content validity; PRO concept identification as the important first step in establishing content validity; instrument identification and the initial review process; key issues in qualitative methodology; and potential threats to content validity, with three case examples used to illustrate types of threats and how they might be resolved. A table of steps used to identify and evaluate an existing PRO instrument is provided, and figures are used to illustrate the meaning of content validity in relation to instrument development and evaluation.

Results and recommendations: Four important threats to content validity are identified: unclear conceptual match between the PRO instrument and the intended claim, lack of direct patient input into PRO item content from the target population in which the claim is desired, no evidence that the most relevant and important item content is contained in the instrument, and lack of documentation to support modifications to the PRO instrument. In some cases, threats to content validity identified through careful review of a specific application may be reduced through additional, well-documented qualitative studies that specifically address the issue of concern. Published evidence of the content validity of a PRO instrument for an intended application is often limited. Such evidence is, however, important to evaluating the adequacy of a PRO instrument for the intended application. This article provides an overview of key issues involved in assessing and documenting content validity as it relates to using existing instruments in the drug approval process.

              What is sufficient evidence for the reliability and validity of patient-reported outcome measures?

This article focuses on the necessary psychometric properties of a patient-reported outcome (PRO) measure. Topics include the importance of reliability and validity, psychometric approaches used to provide reliability and validity estimates, the kinds of evidence needed to indicate that a PRO has a sufficient level of reliability and validity, contexts that may affect psychometric properties, methods available to evaluate PRO instruments when the context varies, and types of reliability and validity testing that are appropriate during different phases of clinical trials. Points discussed include the perspective that the psychometric properties of reliability and validity are on a continuum in which the more evidence one has, the greater confidence there is in the value of the PRO data. Construct validity is the type of validity most frequently used with PRO instruments, as few "gold standards" exist to allow the use of criterion validity, and content validity by itself provides only beginning evidence of validity. Several guidelines are recommended for establishing sufficient evidence of reliability and validity. For clinical trials, a minimum reliability threshold of 0.70 is recommended. Sample sizes for testing should include at least 200 cases, and results should be replicated in at least one additional sample. At least one full report on the development of the instrument and one on the use of the instrument are deemed necessary to evaluate the PRO's psychometric properties. Psychometric testing ideally occurs before the initiation of Phase III trials. When testing does not occur prior to a Phase III trial, considerable risk is posed in relation to the ability to substantiate the use of the PRO data. Various qualitative approaches (e.g., focus groups, behavioral coding, cognitive interviews) and quantitative approaches (e.g., differential item functioning testing) are useful in evaluating the reliability and validity of PRO instruments.
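As an illustration of one common way to quantify the reliability evidence discussed here (and of the 2-day reproducibility ICC reported in the main abstract above), the following is a minimal Python sketch of a two-way random-effects ICC(2,1) on simulated test-retest data, checked against the 0.70 guideline quoted in this abstract. The data, the choice of ICC(2,1), and the variable names are illustrative assumptions, not the exact method used in either study.

import numpy as np

rng = np.random.default_rng(1)

# Simulated placeholder data: 200 patients scored on two measurement days.
n_patients, n_days = 200, 2
true_score = rng.normal(size=(n_patients, 1))
scores = true_score + rng.normal(scale=0.6, size=(n_patients, n_days))

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement
    (Shrout & Fleiss), computed from the ANOVA mean squares."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # occasions
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

icc = icc_2_1(scores)
print(f"ICC(2,1) = {icc:.2f}  (meets the 0.70 guideline: {icc >= 0.70})")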

                Author and article information

Journal: PLoS ONE
Publisher: Public Library of Science (PLoS)
ISSN: 1932-6203
Published: March 22, 2018
Volume 13, Issue 3, Article e0194180
DOI: 10.1371/journal.pone.0194180
© 2018
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)

