
      Assessing the Quality of Decision Support Technologies Using the International Patient Decision Aid Standards instrument (IPDASi)


          Abstract

          Objectives

          To describe the development, validation and inter-rater reliability of an instrument to measure the quality of patient decision support technologies (decision aids).

          Design

          Scale development study, involving construct, item and scale development, validation and reliability testing.

          Setting

          There has been increasing use of decision support technologies – adjuncts to the discussions clinicians have with patients about difficult decisions. A global interest in developing these interventions exists among both for-profit and not-for-profit organisations. It is therefore essential to have internationally accepted standards to assess the quality of their development, process, content, potential bias and method of field testing and evaluation.

          Participants

          Twenty-five researcher-members of the International Patient Decision Aid Standards Collaboration worked together to develop the instrument (IPDASi). In the fourth stage (the reliability study), eight raters assessed thirty randomly selected decision support technologies.

          Results

          IPDASi measures quality in 10 dimensions, using 47 items, and provides an overall quality score (scaled from 0 to 100) for each intervention. Overall IPDASi scores ranged from 33 to 82 across the decision support technologies sampled (n = 30), enabling discrimination. The inter-rater intraclass correlation for the overall quality score was 0.80. Correlations of dimension scores with the overall score were all positive (0.31 to 0.68). Cronbach's alpha values for the 8 raters ranged from 0.72 to 0.93. Cronbach's alphas based on the dimension means ranged from 0.50 to 0.81, indicating that the dimensions, although well correlated, measure different aspects of decision support technology quality. A short version (19 items) was also developed; it had mean scores very similar to the full IPDASi and a high correlation between the short and overall scores of 0.87 (CI 0.79 to 0.92).
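
          The scoring and reliability statistics above can be sketched in code. This is an illustrative sketch only: the assumptions that items are rated on a 1-4 scale, that a dimension score is the rescaled mean of its items, and that the overall score is the mean of the dimension scores are not taken from the paper, and all function names are hypothetical.

```python
# Illustrative sketch of IPDASi-style scoring and inter-rater reliability.
# Assumed (not from the paper): items rated 1-4, dimension score = mean of
# its items rescaled to 0-100, overall score = mean of dimension scores.
from statistics import mean, pvariance

def dimension_score(item_ratings, lo=1, hi=4):
    """Rescale the mean of one dimension's item ratings to a 0-100 score."""
    return 100 * (mean(item_ratings) - lo) / (hi - lo)

def overall_score(dimensions):
    """Overall quality score: mean of the per-dimension 0-100 scores."""
    return mean(dimension_score(items) for items in dimensions)

def cronbach_alpha(ratings_by_rater):
    """Cronbach's alpha, treating each rater as an 'item'.

    ratings_by_rater: k lists, one per rater, each holding that rater's
    overall scores for the same n decision support technologies.
    """
    k = len(ratings_by_rater)
    totals = [sum(scores) for scores in zip(*ratings_by_rater)]
    rater_variance = sum(pvariance(r) for r in ratings_by_rater)
    return k / (k - 1) * (1 - rater_variance / pvariance(totals))
```

          With these conventions, two raters who score every technology identically yield an alpha of 1.0, and alpha falls as their rankings diverge, which is the sense in which the paper's 0.72-0.93 range indicates good rater agreement.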

          Conclusions

          This work demonstrates that IPDASi can assess the quality of decision support technologies. The existing IPDASi provides an assessment of the quality of a DST's components and will be used as a tool to provide formative advice to DST developers and summative assessments for those who want to compare their tools against an existing benchmark.


                Author and article information

                Journal
                PLoS ONE, Public Library of Science (San Francisco, USA)
                ISSN: 1932-6203
                Published: 4 March 2009; 4(3): e4705
                Affiliations
                [1 ]Department of Primary Care and Public Health, School of Medicine and the School of Psychology, Cardiff University, Cardiff, United Kingdom
                [2 ]Ottawa Health Research Institute, University of Ottawa, Ottawa, Ontario, Canada
                [3 ]School of Nursing, University of Ottawa, Ottawa, Ontario, Canada
                [4 ]W. Alpert Medical School, Brown University, Centers for Behavioural and Preventive Medicine, Providence, Rhode Island
                [5 ]Department of Internal Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
                [6 ]Maine Medical Center, Center for Outcomes Research and Evaluation, Portland, Maine, United States of America
                [7 ]Picker Institute Europe, King's Mead House, Oxford, United Kingdom
                [8 ]John M. Eisenberg Clinical Decisions and Communications Science Center, Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, United States of America
                [9 ]Institute and Policlinic for Medical Psychology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
                [10 ]Center for Ethics, College of Human Medicine, Michigan State University, East Lansing, Michigan, United States of America
                [11 ]Centre Léon Bérard, University of Lyon, Lyon, France
                [12 ]Institute of Health and Society, Medical School, Framlington Place, University of Newcastle, Newcastle upon Tyne, United Kingdom
                [13 ]Department of Oncology, McMaster University, Juravinski Cancer Centre, Hamilton Ontario, Canada
                [14 ]Department of General Practice, School for Public Health and Primary Care (CAPHRI), Maastricht University, Maastricht, the Netherlands
                University of California San Francisco, United States of America
                Author notes

                Conceived and designed the experiments: GE AO AE. Performed the experiments: GE AO CB MAD ED NJ SK AS SS MS. Analyzed the data: GE CB RGN MP MAD ED NJ SK AS SS MS. Contributed reagents/materials/analysis tools: GE. Wrote the paper: GE AO CB RGN MP MAD ED NJ SK AS SS MS SJB NC AC KE MH MHR NM DS RT TW TvdW AE.

                Article
                Manuscript ID: 08-PONE-RA-05459R1
                DOI: 10.1371/journal.pone.0004705
                PMC: 2649534
                PMID: 19259269
                Elwyn et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
                History
                10 July 2008
                28 October 2008
                Page count
                Pages: 9
                Categories
                Research Article
                Evidence-Based Healthcare
                Evidence-Based Healthcare/Bedside Evidence-Based Medicine
                Evidence-Based Healthcare/Health Services Research and Economics

