
      Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare

      review-article


          Abstract

          The integration of artificial intelligence (AI) into healthcare promises groundbreaking advancements in patient care, revolutionizing clinical diagnosis, predictive medicine, and decision-making. This transformative technology uses machine learning, natural language processing, and large language models (LLMs) to process and reason like human intelligence. OpenAI's ChatGPT, a sophisticated LLM, holds immense potential in medical practice, research, and education. However, as AI in healthcare gains momentum, it brings forth profound ethical challenges that demand careful consideration. This comprehensive review explores key ethical concerns in the domain, including privacy, transparency, trust, responsibility, bias, and data quality. Protecting patient privacy in data-driven healthcare is crucial, with potential implications for psychological well-being and data sharing. Strategies like homomorphic encryption (HE) and secure multiparty computation (SMPC) are vital to preserving confidentiality. Transparency and trustworthiness of AI systems are essential, particularly in high-risk decision-making scenarios. Explainable AI (XAI) emerges as a critical aspect, ensuring a clear understanding of AI-generated predictions. Cybersecurity becomes a pressing concern as AI's complexity creates vulnerabilities for potential breaches. Determining responsibility in AI-driven outcomes raises important questions, with debates on AI's moral agency and human accountability. Shifting from data ownership to data stewardship enables responsible data management in compliance with regulations. Addressing bias in healthcare data is crucial to avoid AI-driven inequities. Biases present in data collection and algorithm development can perpetuate healthcare disparities. A public-health approach is advocated to address inequalities and promote diversity in AI research and the workforce. 
Maintaining data quality is imperative in AI applications, with convolutional neural networks showing promise in multi-input/mixed data models, offering a comprehensive patient perspective. In this ever-evolving landscape, it is imperative to adopt a multidimensional approach involving policymakers, developers, healthcare practitioners, and patients to mitigate ethical concerns. By understanding and addressing these challenges, we can harness the full potential of AI in healthcare while ensuring ethical and equitable outcomes.
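The abstract names secure multiparty computation (SMPC) as one strategy for preserving confidentiality, without prescribing a construction. As a toy illustration only (not production cryptography, and not the review's own method), additive secret sharing — a common SMPC building block — lets two hospitals compute a joint statistic without either revealing its raw value; all function names here are hypothetical:

```python
import random

PRIME = 2**61 - 1  # modulus for share arithmetic (a Mersenne prime)

def share(secret, n_parties=3):
    """Split an integer secret into n additive shares mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod PRIME."""
    return sum(shares) % PRIME

# Two hospitals each secret-share a patient count without revealing it.
a_shares = share(120)
b_shares = share(87)

# Shares are added party-wise; only the aggregate is ever reconstructed.
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 207, yet no single party saw 120 or 87
```

Any single party's shares are uniformly random and reveal nothing; only the pooled sum is opened, which is the confidentiality property the review alludes to.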

          Related collections

          Most cited references (45)


          ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns

          ChatGPT is an artificial intelligence (AI)-based conversational large language model (LLM). The potential applications of LLMs in health care education, research, and practice could be promising if the associated valid concerns are proactively examined and addressed. The current systematic review aimed to investigate the utility of ChatGPT in health care education, research, and practice and to highlight its potential limitations. Using the PRISMA guidelines, a systematic search was conducted to retrieve English records in PubMed/MEDLINE and Google Scholar (published research or preprints) that examined ChatGPT in the context of health care education, research, or practice. A total of 60 records were eligible for inclusion. Benefits of ChatGPT were cited in 51/60 (85.0%) records and included: (1) improved scientific writing and enhancing research equity and versatility; (2) utility in health care research (efficient analysis of datasets, code generation, literature reviews, saving time to focus on experimental design, and drug discovery and development); (3) benefits in health care practice (streamlining the workflow, cost saving, documentation, personalized medicine, and improved health literacy); and (4) benefits in health care education including improved personalized learning and the focus on critical thinking and problem-based learning. Concerns regarding ChatGPT use were stated in 58/60 (96.7%) records including ethical, copyright, transparency, and legal issues, the risk of bias, plagiarism, lack of originality, inaccurate content with risk of hallucination, limited knowledge, incorrect citations, cybersecurity issues, and risk of infodemics. The promising applications of ChatGPT can induce paradigm shifts in health care education, research, and practice. However, the embrace of this AI chatbot should be conducted with extreme caution considering its potential limitations.
As it currently stands, ChatGPT does not qualify to be listed as an author in scientific articles unless the ICMJE/COPE guidelines are revised or amended. An initiative involving all stakeholders in health care education, research, and practice is urgently needed. This will help to set a code of ethics to guide the responsible use of ChatGPT among other LLMs in health care and academia.

            Participation in cancer clinical trials: race-, sex-, and age-based disparities.

            Despite the importance of diversity of cancer trial participants with regard to race, ethnicity, age, and sex, there is little recent information about the representation of these groups in clinical trials. To characterize the representation of racial and ethnic minorities, the elderly, and women in cancer trials sponsored by the National Cancer Institute. Cross-sectional population-based analysis of all participants in therapeutic nonsurgical National Cancer Institute Clinical Trial Cooperative Group breast, colorectal, lung, and prostate cancer clinical trials in 2000 through 2002. In a separate analysis, the ethnic distribution of patients enrolled in 2000 through 2002 was compared with those enrolled in 1996 through 1998, using logistic regression models to estimate the relative risk ratio of enrollment for racial and ethnic minorities to that of white patients during these time periods. Enrollment fraction, defined as the number of trial enrollees divided by the estimated US cancer cases in each race and age subgroup. Cancer research participation varied significantly across racial/ethnic and age groups. Compared with a 1.8% enrollment fraction among white patients, lower enrollment fractions were noted in Hispanic (1.3%; odds ratio [OR] vs whites, 0.72; 95% confidence interval [CI], 0.68-0.77; P<.001) and black (1.3%; OR, 0.71; 95% CI, 0.68-0.74; P<.001) patients. There was a strong relationship between age and enrollment fraction, with trial participants 30 to 64 years of age representing 3.0% of incident cancer patients in that age group, in comparison to 1.3% of 65- to 74-year-old patients and 0.5% of patients 75 years of age and older. This inverse relationship between age and trial enrollment fraction was consistent across racial and ethnic groups. Although the total number of trial participants increased during our study period, the representation of racial and ethnic minorities decreased. 
In comparison to whites, after adjusting for age, cancer type, and sex, patients enrolled in 2000 through 2002 were 24% less likely to be black (adjusted relative risk ratio, 0.76; 95% CI, 0.65-0.89; P<.001). Men were more likely than women to enroll in colorectal cancer trials (enrollment fractions: 2.1% vs 1.6%, respectively; OR, 1.30; 95% CI, 1.24-1.35; P<.001) and lung cancer trials (enrollment fractions: 0.9% vs 0.7%, respectively; OR, 1.23; 95% CI, 1.16-1.31; P<.001). Enrollment in cancer trials is low for all patient groups. Racial and ethnic minorities, women, and the elderly were less likely to enroll in cooperative group cancer trials than were whites, men, and younger patients, respectively. The proportion of trial participants who are black has declined in recent years.
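The reported odds ratios can be sanity-checked from the enrollment fractions alone. A minimal sketch (the exact counts behind the published ORs are not given in the abstract, so this is only an approximate cross-check, and `odds_ratio` is an illustrative helper, not from the study):

```python
def odds_ratio(p_group, p_reference):
    """Odds ratio of two proportions: (p1/(1-p1)) / (p0/(1-p0))."""
    odds = lambda p: p / (1 - p)
    return odds(p_group) / odds(p_reference)

# Hispanic vs white enrollment fractions from the abstract: 1.3% vs 1.8%.
print(round(odds_ratio(0.013, 0.018), 2))  # 0.72, matching the reported OR
```

The agreement with the published OR of 0.72 (95% CI, 0.68-0.77) shows the odds ratio here is driven directly by the gap in enrollment fractions.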

              Explainability for artificial intelligence in healthcare: a multidisciplinary perspective

              Background Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet, explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice. Methods Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the “Principles of Biomedical Ethics” by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI. Results Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. When looking at the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI.
We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health. Conclusions To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.
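One widely used model-agnostic route to the explainability this reference calls for is permutation importance: shuffle one input feature and measure how much the model's accuracy drops. The sketch below is a self-contained toy (the data, "risk model", and function names are invented for illustration, not taken from the paper):

```python
import random

def permutation_importance(model, X, y, feature_idx, metric, n_repeats=20):
    """Average drop in the metric when one feature's column is shuffled:
    a larger drop means the model leans on that feature more."""
    baseline = metric(model(X), y)
    drops = []
    for _ in range(n_repeats):
        Xp = [row[:] for row in X]                 # copy the dataset
        col = [row[feature_idx] for row in Xp]
        random.shuffle(col)                        # break the feature-label link
        for row, v in zip(Xp, col):
            row[feature_idx] = v
        drops.append(baseline - metric(model(Xp), y))
    return sum(drops) / n_repeats

def accuracy(pred, y):
    return sum(p == t for p, t in zip(pred, y)) / len(y)

# A toy "risk model" that only ever reads feature 0.
model = lambda X: [1 if row[0] > 0.5 else 0 for row in X]
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

print(permutation_importance(model, X, y, 0, accuracy))  # large drop (~0.5)
print(permutation_importance(model, X, y, 1, accuracy))  # 0.0: feature 1 is never read
```

Because the technique treats the model as a black box, it applies equally to the opaque clinical decision support systems the paper discusses, though it explains global feature reliance rather than individual predictions.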

                Author and article information

                Journal
                Cureus
                ISSN: 2168-8184
                Publisher: Cureus (Palo Alto, CA)
                Published: 10 August 2023
                Volume 15, Issue 8: e43262
                Affiliations
                [1] Orthopedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
                [2] Orthopedics, Government Medical College, Omandurar Government Estate, Chennai, IND
                [3] Medicine, Shri Madan Lal Khurana Chest Clinic, New Delhi, IND
                Article
                DOI: 10.7759/cureus.43262
                PMCID: 10492220
                PMID: 37692617
                Copyright © 2023, Jeyaraman et al.

                This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

                History
                Published: 10 August 2023
                Categories
                Public Health
                Healthcare Technology
                Epidemiology/Public Health

                Keywords: secure multiparty computation, homomorphic encryption, healthcare, large language models, ChatGPT, artificial intelligence (AI)
