
To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making

Zana Buçinca [1], Maja Barbara Malaya [2], Krzysztof Z. Gajos [1]
      Proceedings of the ACM on Human-Computer Interaction
      Association for Computing Machinery (ACM)


          Abstract

People supported by AI-powered decision support tools frequently overrely on the AI: they accept an AI's suggestion even when that suggestion is wrong. Adding explanations to the AI decisions does not appear to reduce the overreliance, and some studies suggest that it might even increase it. Informed by the dual-process theory of cognition, we posit that people rarely engage analytically with each individual AI recommendation and explanation, and instead develop general heuristics about whether and when to follow the AI suggestions. Building on prior research on medical decision-making, we designed three cognitive forcing interventions to compel people to engage more thoughtfully with the AI-generated explanations. We conducted an experiment (N=199) in which we compared our three cognitive forcing designs to two simple explainable AI approaches and to a no-AI baseline. The results demonstrate that cognitive forcing significantly reduced overreliance compared to the simple explainable AI approaches. However, there was a trade-off: people assigned the least favorable subjective ratings to the designs that reduced overreliance the most. To audit our work for intervention-generated inequalities, we investigated whether our interventions benefited people with different levels of Need for Cognition (i.e., motivation to engage in effortful mental activities) equally. Our results show that, on average, the cognitive forcing interventions benefited participants higher in Need for Cognition more. Our research suggests that human cognitive motivation moderates the effectiveness of explainable AI solutions.
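
The paper's key outcome measure is overreliance: the fraction of trials on which the AI's suggestion was wrong but the participant accepted it anyway. The following is a minimal illustrative sketch, not the authors' analysis code; the data, column layout, and condition names are hypothetical, chosen only to show how such a rate could be computed per experimental condition.

# Sketch (hypothetical data): overreliance = share of AI-wrong trials
# in which the participant accepted the AI's suggestion anyway.
from collections import defaultdict

# Hypothetical trial records: (participant, condition, ai_correct, accepted_ai)
trials = [
    ("p01", "simple_xai",        False, True),   # wrong advice, accepted -> overreliance
    ("p01", "simple_xai",        True,  True),
    ("p02", "cognitive_forcing", False, False),  # wrong advice, rejected
    ("p02", "cognitive_forcing", True,  True),
]

def overreliance_by_condition(trials):
    """Fraction of AI-wrong trials in which the AI suggestion was accepted."""
    wrong = defaultdict(int)     # count of AI-wrong trials per condition
    accepted = defaultdict(int)  # ...of those, how many were accepted
    for _, condition, ai_correct, accepted_ai in trials:
        if not ai_correct:
            wrong[condition] += 1
            if accepted_ai:
                accepted[condition] += 1
    return {c: accepted[c] / wrong[c] for c in wrong}

print(overreliance_by_condition(trials))
# e.g. {'simple_xai': 1.0, 'cognitive_forcing': 0.0}

Comparing this rate between the cognitive forcing conditions and the simple explainable AI conditions is the paper's central contrast.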


                Author and article information

                Journal
                Proceedings of the ACM on Human-Computer Interaction
                Proc. ACM Hum.-Comput. Interact.
                Association for Computing Machinery (ACM)
ISSN: 2573-0142
Published: April 13, 2021
Volume: 5
Issue: CSCW1
Pages: 1-21
                Affiliations
[1] Harvard University, Cambridge, MA, USA
[2] Institute of Applied Computer Science, Lodz, Poland
Article
DOI: 10.1145/3449287
PMID: 36644216
                © 2021
