
      Decision boxes for clinicians to support evidence-based practice and shared decision making: the user experience

      research-article


          Abstract

          Background

This project engages patients and physicians in the development of Decision Boxes: short clinical topic summaries covering medical questions that have no single best answer. Decision Boxes aim to prepare the clinician to communicate the risks and benefits of the available options to the patient, so that clinician and patient can make an informed decision together.

          Methods

Seven researchers (including four practicing family physicians) used a Delphi survey to select 10 clinical topics relevant to primary care practice. We then developed one-page prototypes for two of these topics: prostate cancer screening with the prostate-specific antigen test, and prenatal screening for trisomy 21 with the serum integrated test. We presented the prototypes to purposeful samples of family physicians (two focus groups) and patients (four focus groups). We used the User Experience Honeycomb to explore barriers and facilitators to the communication design used in the Decision Boxes. All discussions were transcribed, and three researchers performed a thematic content analysis of the transcripts. The coding scheme was first developed from the Honeycomb’s seven themes (valuable, usable, credible, useful, desirable, accessible, and findable) and then extended with new themes suggested by the data. The prototypes were modified in light of our findings.

          Results

Three rounds were necessary for a majority of researchers to agree on 10 clinical topics. Fifteen physicians and 33 patients participated in the focus groups. Following the analyses, three sections were added to the Decision Boxes: an introduction, patient counseling, and references. The information was spread over two pages to make the Decision Boxes less busy and to improve users’ first impression. To strengthen credibility, we gave more visibility to the research institutions involved in their development. A statement of purpose and a flow chart representing the shared decision-making process were added to clarify what the tool is for. Finally, information about the risks and benefits at different risk levels was added, to make it easier to tailor the information to individual patients.

          Conclusion

          Results will guide the development of the eight remaining Decision Boxes. A future study will evaluate the effect of Decision Boxes on the integration of evidence-based and shared decision making principles in clinical practice.


Most cited references (24)


          Validation of a decisional conflict scale.

          The study objective was to evaluate the psychometric properties of a decisional conflict scale (DCS) that elicits: 1) health-care consumers' uncertainty in making a health-related decision; 2) the factors contributing to the uncertainty; and 3) health-care consumers' perceived effective decision making. The DCS was developed in response to the lack of instruments available to evaluate health-care-consumer decision aids and to tailor decision-supporting interventions to particular consumer needs. The scale was evaluated with 909 individuals deciding about influenza immunization or breast cancer screening. A subsample of respondents was retested two weeks later. The test-retest reliability coefficient was 0.81. Internal consistency coefficients ranged from 0.78 to 0.92. The DCS discriminated significantly (p < 0.0002) between those who had strong intentions either to accept or to decline invitations to receive influenza vaccine or breast cancer screening and those whose intentions were uncertain. The scale also discriminated significantly (p < 0.0002) between those who accepted or rejected immunization and those who delayed their decisions to be immunized. There was a weak inverse correlation (r = -0.16, p < 0.05) between the DCS and knowledge test scores. The psychometric properties of the scale are acceptable. It is feasible and easy to administer. Evaluations of responsiveness to change and validation with more difficult decisions are warranted.
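
The internal consistency coefficients quoted above (0.78 to 0.92) are Cronbach's alpha values. Purely as an illustration, and not as part of either study, the following minimal Python sketch shows how alpha is computed from a hypothetical respondents-by-items matrix of scored answers (the data are invented):

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items array of scored responses
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical 1-5 Likert responses from six respondents to a four-item subscale
responses = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 4, 5],
    [3, 3, 3, 2],
    [1, 2, 1, 2],
    [4, 5, 4, 4],
])
print(round(cronbach_alpha(responses), 2))

Values close to 1 indicate that the items vary together, which is the property the DCS authors report for their subscales.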

            Assessing the Quality of Decision Support Technologies Using the International Patient Decision Aid Standards instrument (IPDASi)

Objectives: To describe the development, validation and inter-rater reliability of an instrument to measure the quality of patient decision support technologies (decision aids). Design: Scale development study, involving construct, item and scale development, validation and reliability testing. Setting: There has been increasing use of decision support technologies, adjuncts to the discussions clinicians have with patients about difficult decisions, and a global interest in developing these interventions exists among both for-profit and not-for-profit organisations. It is therefore essential to have internationally accepted standards to assess the quality of their development, process, content, potential bias and method of field testing and evaluation. Methods: Scale development study, involving construct, item and scale development, validation and reliability testing. Participants: Twenty-five researcher-members of the International Patient Decision Aid Standards Collaboration worked together to develop the instrument (IPDASi). In the fourth stage (reliability study), eight raters assessed thirty randomly selected decision support technologies. Results: IPDASi measures quality in 10 dimensions, using 47 items, and provides an overall quality score (scaled from 0 to 100) for each intervention. Overall IPDASi scores ranged from 33 to 82 across the decision support technologies sampled (n = 30), enabling discrimination. The inter-rater intraclass correlation for the overall quality score was 0.80. Correlations of dimension scores with the overall score were all positive (0.31 to 0.68). Cronbach's alpha values for the 8 raters ranged from 0.72 to 0.93. Cronbach's alphas based on the dimension means ranged from 0.50 to 0.81, indicating that the dimensions, although well correlated, measure different aspects of decision support technology quality. A short version (19 items) was also developed; its mean scores were very similar to those of the full IPDASi, and the correlation between the short score and the overall score was high (0.87, CI 0.79 to 0.92). Conclusions: This work demonstrates that IPDASi can assess the quality of decision support technologies. The existing IPDASi provides an assessment of the quality of a DST's components and will be used to provide formative advice to DST developers and summative assessments for those who want to compare their tools against an existing benchmark.
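
The published abstract does not spell out the IPDASi scoring algorithm, so the short Python sketch below is only a hypothetical illustration of the general idea it describes: item ratings are grouped into dimensions, averaged, and rescaled so that the overall quality score lies between 0 and 100. The dimension names and the 1-4 rating scale are assumptions made for the example, not the instrument's actual content.

from statistics import mean

# hypothetical item ratings grouped by dimension (1 = lowest quality, 4 = highest)
ratings = {
    "information": [3, 4, 4, 3, 2],
    "probabilities": [2, 3, 3, 2],
    "values_clarification": [4, 4, 3],
}

def rescale(value, lo=1, hi=4):
    # map a mean rating on [lo, hi] onto a 0-100 quality score
    return 100 * (value - lo) / (hi - lo)

dimension_scores = {name: rescale(mean(items)) for name, items in ratings.items()}
overall = mean(dimension_scores.values())
print({name: round(score, 1) for name, score in dimension_scores.items()}, round(overall, 1))

Read the 33-82 range quoted above in this light: each sampled decision support technology received one such overall score, and the spread of scores is what allowed the instrument to discriminate between tools.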

              Believe it or not: Factors influencing credibility on the Web


                Author and article information

Journal
Implement Sci (Implementation Science)
BioMed Central
ISSN: 1748-5908
Published: 3 August 2012
Volume 7, Article 72
                Affiliations
[1] Health Information Research Unit, Department of Clinical Epidemiology and Biostatistics, McMaster University, CRL-139, 1280 Main Street West, Hamilton, ON, L8S 4K1, Canada
[2] Research Center of the CHUQ, Saint-François d’Assise Hospital, 10 rue de l’Espinay, D6-730, Quebec City, (QC), G1L 3L5, Canada
[3] Department of Family Medicine, McGill University, 515-517 Pine Avenue West, Montreal, (QC), H2W 1S4, Canada
[4] Department of Clinical Epidemiology and Biostatistics and Department of Medicine, DeGroote School of Medicine, McMaster University, 1280 Main Street West, CRL-125, Hamilton, ON, L8S 4K1, Canada
[5] Department of Family Medicine and Emergency Medicine, Université Laval, 1050 avenue de la Médecine, Room #4617, Quebec City, (QC), G1V 0A6, Canada
                [6 ]Universidad del Valle, Calle 4B No. 36 – 00 Edificio 100 1er piso, Cali, Colombia
Article
Article ID: 1748-5908-7-72
DOI: 10.1186/1748-5908-7-72
PMC: 3533695
PMID: 22862935
3804a756-ad7f-4f4e-a8ca-fc17e1b77d2b
                Copyright ©2012 Giguere et al.; licensee BioMed Central Ltd.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

History
Received: 5 December 2011
Accepted: 25 June 2012
                Categories
                Research

                Medicine
Keywords: usability, patient-centered care, communication design, clinical topic summary, evidence-based medicine, knowledge translation, counselling, decision support, risk communication, user experience
