
      Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum


          Abstract

          Importance

          The rapid expansion of virtual health care has caused a surge in patient messages concomitant with more work and burnout among health care professionals. Artificial intelligence (AI) assistants could potentially aid in creating answers to patient questions by drafting responses that could be reviewed by clinicians.

          Objective

          To evaluate the ability of an AI chatbot assistant (ChatGPT), released in November 2022, to provide quality and empathetic responses to patient questions.

          Design, Setting, and Participants

In this cross-sectional study, a public and nonidentifiable database of questions from a public social media forum (Reddit's r/AskDocs) was used to randomly draw 195 exchanges from October 2022 in which a verified physician responded to a public question. Chatbot responses were generated by entering the original question into a fresh session (without prior questions having been asked in the session) on December 22 and 23, 2022. The original question along with anonymized and randomly ordered physician and chatbot responses were evaluated in triplicate by a team of licensed health care professionals. Evaluators chose "which response was better" and judged both "the quality of information provided" (very poor, poor, acceptable, good, or very good) and "the empathy or bedside manner provided" (not empathetic, slightly empathetic, moderately empathetic, empathetic, or very empathetic). Mean outcomes were ordered on a 1 to 5 scale and compared between chatbot and physicians.
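A minimal sketch of the rating procedure described above, assuming the Likert labels map directly onto the stated 1 to 5 scale (the function name and the example ratings are illustrative, not from the study):

```python
# Map the study's Likert labels onto the 1-5 scale described in the abstract.
QUALITY_SCALE = {
    "very poor": 1, "poor": 2, "acceptable": 3, "good": 4, "very good": 5,
}
EMPATHY_SCALE = {
    "not empathetic": 1, "slightly empathetic": 2,
    "moderately empathetic": 3, "empathetic": 4, "very empathetic": 5,
}

def mean_rating(labels, scale):
    """Average triplicate evaluator ratings for one response (1-5 scale)."""
    scores = [scale[label] for label in labels]
    return sum(scores) / len(scores)

# Illustrative example: three evaluators rate one response for quality.
print(mean_rating(["good", "very good", "good"], QUALITY_SCALE))  # ~4.33
```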

          Results

Of the 195 questions and responses, evaluators preferred chatbot responses to physician responses in 78.6% (95% CI, 75.0%-81.8%) of the 585 evaluations. Mean (IQR) physician responses were significantly shorter than chatbot responses (52 [17-62] words vs 211 [168-245] words; t = 25.4; P < .001). Chatbot responses were rated of significantly higher quality than physician responses (t = 13.3; P < .001). The proportion of responses rated as good or very good quality (≥4), for instance, was higher for chatbot than for physicians (chatbot: 78.5%, 95% CI, 72.3%-84.1%; physicians: 22.1%, 95% CI, 16.4%-28.2%). This amounted to a 3.6 times higher prevalence of good or very good quality responses for the chatbot. Chatbot responses were also rated significantly more empathetic than physician responses (t = 18.9; P < .001). The proportion of responses rated empathetic or very empathetic (≥4) was higher for chatbot than for physicians (chatbot: 45.1%, 95% CI, 38.5%-51.8%; physicians: 4.6%, 95% CI, 2.1%-7.7%). This amounted to a 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.
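The "times higher prevalence" figures above are simple prevalence ratios; a quick check using the percentages reported in this abstract (the function name is a label chosen here, not the study's code):

```python
# Prevalence ratio: chatbot proportion divided by physician proportion,
# using the percentages reported in the Results.
def prevalence_ratio(chatbot_pct, physician_pct):
    return chatbot_pct / physician_pct

quality_ratio = prevalence_ratio(78.5, 22.1)  # good or very good quality
empathy_ratio = prevalence_ratio(45.1, 4.6)   # empathetic or very empathetic

print(round(quality_ratio, 1))  # 3.6
print(round(empathy_ratio, 1))  # 9.8
```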

          Conclusions

In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum. Further exploration of this technology is warranted in clinical settings, such as using chatbots to draft responses that physicians could then edit. Randomized trials could further assess whether using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.


Author and article information

Journal
JAMA Internal Medicine (JAMA Intern Med)
American Medical Association (AMA)
ISSN: 2168-6106
Published: April 28, 2023

Affiliations
[1] Qualcomm Institute, University of California San Diego, La Jolla
[2] Division of Infectious Diseases and Global Public Health, Department of Medicine, University of California San Diego, La Jolla
[3] Department of Computer Science, Bryn Mawr College, Bryn Mawr, Pennsylvania
[4] Department of Computer Science, Johns Hopkins University, Baltimore, Maryland
[5] Herbert Wertheim School of Public Health and Human Longevity Science, University of California San Diego, La Jolla
[6] Human Longevity, La Jolla, California
[7] Naval Health Research Center, Navy, San Diego, California
[8] Division of Blood and Marrow Transplantation, Department of Medicine, University of California San Diego, La Jolla
[9] Moores Cancer Center, University of California San Diego, La Jolla
[10] Department of Biomedical Informatics, University of California San Diego, La Jolla
[11] Altman Clinical Translational Research Institute, University of California San Diego, La Jolla

Article
DOI: 10.1001/jamainternmed.2023.1838
PMC: 10148230
PMID: 37115527
© 2023
