      Adopting and expanding ethical principles for generative artificial intelligence from military to healthcare

      review-article

          Abstract

          In 2020, the U.S. Department of Defense officially disclosed a set of ethical principles to guide the use of Artificial Intelligence (AI) technologies on future battlefields. Despite stark differences, there are core similarities between military and medical service. Warriors on the battlefield often face life-altering circumstances that require quick decision-making. Medical providers experience similar challenges in a rapidly changing healthcare environment, such as in the emergency department or during surgery to treat a life-threatening condition. Generative AI, an emerging technology designed to efficiently generate valuable information, holds great promise. As computing power becomes more accessible and health data, such as electronic health records, electrocardiograms, and medical images, grow more abundant, it is inevitable that healthcare will be revolutionized by this technology. Recently, generative AI has garnered considerable attention in the medical research community, leading to debates about its application in the healthcare sector, mainly due to concerns about transparency and related issues. Meanwhile, questions about whether modeling biases could exacerbate health disparities have raised notable ethical concerns regarding the use of this technology in healthcare. However, the ethical principles for generative AI in healthcare have been understudied. As a result, there are no clear solutions to address ethical concerns, and decision-makers often neglect to consider the significance of ethical principles before implementing generative AI in clinical practice. To address these issues, we examine ethical principles from the military perspective and propose the “GREAT PLEA” ethical principles for generative AI in healthcare: Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy, and Autonomy. Furthermore, by contrasting the ethical concerns and risks of the two domains, we introduce a practical framework for adopting and expanding these ethical principles, which have proven useful in the military, to generative AI in healthcare. Ultimately, we aim to proactively address the ethical dilemmas and challenges posed by the integration of generative AI into healthcare practice.
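The GREAT PLEA principles are proposed as a governance framework rather than as software. Purely as an illustrative sketch (the `EthicsReview` helper, the system name, and the notes are hypothetical, not from the paper), the Python snippet below shows one way a deployment team could track whether each principle has been explicitly assessed before putting a generative AI tool into clinical use.

```python
# Hypothetical illustration: track assessment of the GREAT PLEA principles
# for a generative AI system prior to clinical deployment. Nothing here is
# prescribed by the paper; it is one possible checklist encoding.
from dataclasses import dataclass, field

GREAT_PLEA = (
    "Governability", "Reliability", "Equity", "Accountability",
    "Traceability", "Privacy", "Lawfulness", "Empathy", "Autonomy",
)

@dataclass
class EthicsReview:
    """Records which GREAT PLEA principles have been assessed, with notes."""
    system_name: str
    assessments: dict = field(default_factory=dict)  # principle -> notes

    def assess(self, principle: str, notes: str) -> None:
        if principle not in GREAT_PLEA:
            raise ValueError(f"Unknown principle: {principle}")
        self.assessments[principle] = notes

    def unassessed(self) -> list:
        return [p for p in GREAT_PLEA if p not in self.assessments]

# Hypothetical usage for a fictional clinical note summarizer.
review = EthicsReview("clinical-note-summarizer")
review.assess("Privacy", "PHI is de-identified before reaching the model.")
review.assess("Traceability", "Prompts, outputs, and model version are logged.")
print("Principles still to assess:", review.unassessed())
```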

          Most cited references (52)

          Dissecting racial bias in an algorithm used to manage the health of populations

          Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
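The mechanism described above (predicting cost as a proxy for illness under unequal access to care) can be illustrated with a small simulation. This is a minimal sketch under stated assumptions, not the study's algorithm or data: it assumes that, at equal illness, Black patients generate lower recorded spending, and then flags the highest cost-based risk scores for extra help.

```python
# Minimal simulation of proxy-target bias: a risk score based on health care
# cost under-identifies sicker Black patients when unequal access to care
# lowers their recorded spending. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
black = rng.random(n) < 0.5                         # hypothetical 50/50 split
illness = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need

# Assumption: equal illness -> lower observed spending for Black patients
# because of unequal access to care.
access = np.where(black, 0.7, 1.0)
cost = illness * access + rng.normal(0.0, 0.2, n)

score = cost                                        # cost-based "risk score"
flagged = score >= np.quantile(score, 0.97)         # top 3% get extra help

print("Mean illness, flagged Black patients:", round(illness[flagged & black].mean(), 2))
print("Mean illness, flagged White patients:", round(illness[flagged & ~black].mean(), 2))
# At the same score threshold, flagged Black patients are sicker, mirroring
# the bias reported when cost is used as the ground-truth proxy for health.
```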

            Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites.

            Black Americans are systematically undertreated for pain relative to white Americans. We examine whether this racial bias is related to false beliefs about biological differences between blacks and whites (e.g., "black people's skin is thicker than white people's skin"). Study 1 documented these beliefs among white laypersons and revealed that participants who more strongly endorsed false beliefs about biological differences reported lower pain ratings for a black (vs. white) target. Study 2 extended these findings to the medical context and found that half of a sample of white medical students and residents endorsed these beliefs. Moreover, participants who endorsed these beliefs rated the black (vs. white) patient's pain as lower and made less accurate treatment recommendations. Participants who did not endorse these beliefs rated the black (vs. white) patient's pain as higher, but showed no bias in treatment recommendations. These findings suggest that individuals with at least some medical training hold and may use false beliefs about biological differences between blacks and whites to inform medical judgments, which may contribute to racial disparities in pain assessment and treatment.

              Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum

              Importance: The rapid expansion of virtual health care has caused a surge in patient messages concomitant with more work and burnout among health care professionals. Artificial intelligence (AI) assistants could potentially aid in creating answers to patient questions by drafting responses that could be reviewed by clinicians. Objective: To evaluate the ability of an AI chatbot assistant (ChatGPT), released in November 2022, to provide quality and empathetic responses to patient questions. Design, Setting, and Participants: In this cross-sectional study, a public and nonidentifiable database of questions from a public social media forum (Reddit’s r/AskDocs) was used to randomly draw 195 exchanges from October 2022 in which a verified physician responded to a public question. Chatbot responses were generated by entering the original question into a fresh session (without prior questions having been asked in the session) on December 22 and 23, 2022. The original question along with anonymized and randomly ordered physician and chatbot responses were evaluated in triplicate by a team of licensed health care professionals. Evaluators chose “which response was better” and judged both “the quality of information provided” (very poor, poor, acceptable, good, or very good) and “the empathy or bedside manner provided” (not empathetic, slightly empathetic, moderately empathetic, empathetic, and very empathetic). Mean outcomes were ordered on a 1 to 5 scale and compared between chatbot and physicians. Results: Of the 195 questions and responses, evaluators preferred chatbot responses to physician responses in 78.6% (95% CI, 75.0%-81.8%) of the 585 evaluations. Mean (IQR) physician responses were significantly shorter than chatbot responses (52 [17-62] words vs 211 [168-245] words; t = 25.4; P < .001). Chatbot responses were rated of significantly higher quality than physician responses (t = 13.3; P < .001). The proportion of responses rated as good or very good quality (≥4), for instance, was higher for the chatbot than for physicians (chatbot: 78.5%, 95% CI, 72.3%-84.1%; physicians: 22.1%, 95% CI, 16.4%-28.2%). This amounted to a 3.6 times higher prevalence of good or very good quality responses for the chatbot. Chatbot responses were also rated significantly more empathetic than physician responses (t = 18.9; P < .001). The proportion of responses rated empathetic or very empathetic (≥4) was higher for the chatbot than for physicians (chatbot: 45.1%, 95% CI, 38.5%-51.8%; physicians: 4.6%, 95% CI, 2.1%-7.7%). This amounted to a 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot. Conclusions: In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum. Further exploration of this technology is warranted in clinical settings, such as using chatbots to draft responses that physicians could then edit. Randomized trials could further assess whether using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.
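For clarity, the “3.6 times” and “9.8 times” figures follow directly from the reported proportions. The short check below uses only the percentages quoted in the abstract, not the study’s raw data:

```python
# Arithmetic check of the prevalence ratios quoted in the abstract,
# computed from the published proportions only.
good_chatbot, good_physician = 0.785, 0.221   # rated good or very good (>=4)
emp_chatbot, emp_physician = 0.451, 0.046     # rated empathetic or very empathetic (>=4)

print(f"quality ratio: {good_chatbot / good_physician:.1f}x")  # -> 3.6x
print(f"empathy ratio: {emp_chatbot / emp_physician:.1f}x")    # -> 9.8x
```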

                Author and article information

                Contributors
                yanshan.wang@pitt.edu
                Journal
                NPJ Digital Medicine (NPJ Digit Med)
                Nature Publishing Group UK (London)
                ISSN: 2398-6352
                Published: 2 December 2023
                Volume 6, Article number 225
                Affiliations
                [1] Department of Health Information Management, University of Pittsburgh (https://ror.org/01an3r305), Pittsburgh, PA, USA
                [2] Department of Population Health Sciences, Weill Cornell Medicine (https://ror.org/02r109517), New York, NY, USA
                [3] Division of Pulmonary, Allergy, Critical Care & Sleep Medicine, University of Pittsburgh (https://ror.org/01an3r305), Pittsburgh, PA, USA
                [4] Center for Military Medicine Research, University of Pittsburgh (https://ror.org/01an3r305), Pittsburgh, PA, USA
                [5] Telemedicine & Advanced Technology Research Center, US Army, Fort Detrick (https://ror.org/014pvr265), Frederick, MD, USA
                [6] Department of Surgery, Uniformed Services University (GRID grid.265436.0, ISNI 0000 0001 0421 5525), Bethesda, MD, USA
                [7] Virtual Medical Center, Brooke Army Medical Center (https://ror.org/00m1mwc36), San Antonio, TX, USA
                [8] Intelligent Systems Program, University of Pittsburgh (https://ror.org/01an3r305), Pittsburgh, PA, USA
                [9] Department of Biomedical Informatics, University of Pittsburgh (https://ror.org/01an3r305), Pittsburgh, PA, USA
                [10] Clinical and Translational Science Institute, University of Pittsburgh (GRID grid.21925.3d, ISNI 0000 0004 1936 9000), Pittsburgh, PA, USA
                [11] University of Pittsburgh Medical Center (GRID grid.412689.0, ISNI 0000 0001 0650 7433), Pittsburgh, PA, USA
                Author information
                http://orcid.org/0000-0002-9221-3059
                http://orcid.org/0000-0001-9309-8331
                http://orcid.org/0000-0003-3857-8417
                http://orcid.org/0000-0002-0878-1484
                http://orcid.org/0000-0003-4433-7839
                Article
                965
                DOI: 10.1038/s41746-023-00965-x
                PMCID: PMC10693640
                PMID: 38042910
                © The Author(s) 2023

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 18 May 2023
                Accepted: 15 November 2023
                Funding
                Funded by: FundRef 100011359, Pitt | School of Health and Rehabilitation Sciences, University of Pittsburgh (University of Pittsburgh School of Health & Rehabilitation Sciences);
                Funded by: FundRef 100000002, U.S. Department of Health & Human Services | National Institutes of Health (NIH);
                Award ID: R01LM014306
                Funded by: FundRef 100006108, U.S. Department of Health & Human Services | NIH | National Center for Advancing Translational Sciences (NCATS);
                Award ID: U24TR004111
                Funded by: FundRef 100000002, U.S. Department of Health & Human Services | National Institutes of Health (NIH);
                Award ID: 4R00LM013001
                Funded by: FundRef 100000001, National Science Foundation (NSF);
                Award ID: 2145640
                Categories
                Perspective
                Custom metadata
                © Springer Nature Limited 2023

                Keywords: translational research, health care
