
      Chatbots for HIV Prevention and Care: a Narrative Review

      Review article


          Abstract

          Purpose of Review

          To explore the intersection of chatbots and HIV prevention and care. Current applications of chatbots in HIV services, the challenges faced, recent advancements, and future research directions are presented and discussed.

          Recent Findings

          Chatbots facilitate sensitive discussions about HIV, thereby promoting prevention and care strategies. Trustworthiness and accuracy of information were identified as the primary factors influencing user engagement with chatbots. Additionally, the integration into chatbots of AI-driven models that process and generate human-like text presents both breakthroughs and challenges in terms of privacy, bias, resources, and ethical issues.

          Summary

          Chatbots in HIV prevention and care show potential; however, significant work remains in addressing associated ethical and practical concerns. The integration of large language models into chatbots is a promising future direction for their effective deployment in HIV services. Encouraging future research, collaboration among stakeholders, and bold innovative thinking will be pivotal in harnessing the full potential of chatbot interventions.

          Most cited references (11)


          Large language models encode clinical knowledge

          Large language models (LLMs) have demonstrated impressive capabilities, but the bar for clinical applications is high. Attempts to assess the clinical knowledge of models typically rely on automated evaluations based on limited benchmarks. Here, to address these limitations, we present MultiMedQA, a benchmark combining six existing medical question answering datasets spanning professional medicine, research and consumer queries and a new dataset of medical questions searched online, HealthSearchQA. We propose a human evaluation framework for model answers along multiple axes including factuality, comprehension, reasoning, possible harm and bias. In addition, we evaluate Pathways Language Model (PaLM, a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA and Measuring Massive Multitask Language Understanding (MMLU) clinical topics), including 67.6% accuracy on MedQA (US Medical Licensing Exam-style questions), surpassing the prior state of the art by more than 17%. However, human evaluation reveals key gaps. To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, knowledge recall and reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal limitations of today’s models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.
Med-PaLM, a state-of-the-art large language model for medicine, is introduced and evaluated across several medical question answering tasks, demonstrating the promise of these models in this domain.

            The Role of Large Language Models in Medical Education: Applications and Implications

            Large language models (LLMs) such as ChatGPT have sparked extensive discourse within the medical education community, spurring both excitement and apprehension. Written from the perspective of medical students, this editorial offers insights gleaned through immersive interactions with ChatGPT, contextualized by ongoing research into the imminent role of LLMs in health care. Three distinct positive use cases for ChatGPT were identified: facilitating differential diagnosis brainstorming, providing interactive practice cases, and aiding in multiple-choice question review. These use cases can effectively help students learn foundational medical knowledge during the preclinical curriculum while reinforcing the learning of core Entrustable Professional Activities. Simultaneously, we highlight key limitations of LLMs in medical education, including their insufficient ability to teach the integration of contextual and external information, comprehend sensory and nonverbal cues, cultivate rapport and interpersonal interaction, and align with overarching medical education and patient care goals. Through interacting with LLMs to augment learning during medical school, students can gain an understanding of their strengths and weaknesses. This understanding will be pivotal as we navigate a health care landscape increasingly intertwined with LLMs and artificial intelligence.

              The Development and Use of Chatbots in Public Health: Scoping Review

              Background

              Chatbots are computer programs that present a conversation-like interface through which people can access information and services. The COVID-19 pandemic has driven a substantial increase in the use of chatbots to support and complement traditional health care systems. However, despite the uptake in their use, evidence to support the development and deployment of chatbots in public health remains limited. Recent reviews have focused on the use of chatbots during the COVID-19 pandemic and the use of conversational agents in health care more generally. This paper complements this research and addresses a gap in the literature by assessing the breadth and scope of research evidence for the use of chatbots across the domain of public health.

              Objective

              This scoping review had 3 main objectives: (1) to identify the application domains in public health in which there is the most evidence for the development and use of chatbots; (2) to identify the types of chatbots that are being deployed in these domains; and (3) to ascertain the methods and methodologies by which chatbots are being evaluated in public health applications. This paper explored the implications for future research on the development and deployment of chatbots in public health in light of the analysis of the evidence for their use.

              Methods

              Following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines for scoping reviews, relevant studies were identified through searches conducted in the MEDLINE, PubMed, Scopus, Cochrane Central Register of Controlled Trials, IEEE Xplore, ACM Digital Library, and Open Grey databases from mid-June to August 2021. Studies were included if they used or evaluated chatbots for the purpose of prevention or intervention and for which the evidence showed a demonstrable health impact.

              Results

              Of the 1506 studies identified, 32 were included in the review. The results show a substantial increase in interest in chatbots in the past few years, shortly before the pandemic. Half (16/32, 50%) of the research evaluated chatbots applied to mental health or COVID-19. The studies suggest promise in the application of chatbots, especially to easily automated and repetitive tasks, but overall, the evidence for the efficacy of chatbots for prevention and intervention across all domains is limited at present.

              Conclusions

              More research is needed to fully understand the effectiveness of using chatbots in public health. Concerns with the clinical, legal, and ethical aspects of the use of chatbots for health care are well founded given the speed with which they have been adopted in practice. Future research on their use should address these concerns through the development of expertise and best practices specific to public health, including a greater focus on user experience.

                Author and article information

                Contributors
                avanheerden@hsrc.ac.za
                Journal
                Current HIV/AIDS Reports (Curr HIV/AIDS Rep)
                Springer US (New York)
                ISSN: 1548-3568, 1548-3576
                Published: 27 November 2023
                Volume 20, Issue 6: 481-486
                Affiliations
                [1] Center for Community Based Research, Human Sciences Research Council, Old Bus Depot, Pietermaritzburg, 3201, South Africa ( https://ror.org/056206b04)
                [2] SAMRC/WITS Developmental Pathways for Health Research Unit, Department of Paediatrics, School of Clinical Medicine, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, Gauteng, South Africa ( https://ror.org/03rp50x72)
                [3] Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
                [4] Center for Community Health, Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, CA, USA
                [5] Department of Health Policy and Management, Fielding School of Public Health, University of California, Los Angeles, CA, USA
                Article
                Article ID: 681
                DOI: 10.1007/s11904-023-00681-x
                PMCID: 10719151
                PMID: 38010467
                © The Author(s) 2023

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                History
                6 November 2023
                Funding
                Funded by: Human Sciences Research Council
                Categories
                Article
                Custom metadata
                © Springer Science+Business Media, LLC, part of Springer Nature 2023

                Infectious disease & Microbiology
                HIV prevention, chatbots, conversational agents, large language models, generative AI
