
      Artificial Intelligence as a Triage Tool during the Perioperative Period: Pilot Study of Accuracy and Accessibility for Clinical Application

      research-article


          Abstract

          Background:

          Given the dialogistic properties of ChatGPT, we hypothesized that this artificial intelligence (AI) function can be used as a self-service tool where clinical questions can be directly answered by AI. Our objective was to assess the content, accuracy, and accessibility of AI-generated content regarding common perioperative questions for reduction mammaplasty.

          Methods:

          ChatGPT (OpenAI, February Version, San Francisco, Calif.) was used to query 20 common patient concerns that arise in the perioperative period of a reduction mammaplasty. Searches were performed in duplicate, once as a general term and once as a specific clinical question. Query outputs were analyzed both objectively and subjectively. Descriptive statistics, t tests, and chi-square tests were performed where appropriate, with a predetermined significance level of P < 0.05.
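
          The Methods describe a reproducible pipeline: issue paired general and specific queries to ChatGPT, score each output objectively (length, readability) and categorically (background versus prescriptive content), and compare the two query types with t tests and chi-square tests at P < 0.05. As a rough illustration only, the Python sketch below shows one way such scoring and comparison could be assembled; the textstat and scipy libraries, the helper names, the readability formula, and the contingency counts are assumptions for demonstration, not the authors' actual code or data.

              # Illustrative sketch (not the study's code): score AI-generated outputs and
              # compare general vs. specific query groups, mirroring the analysis described
              # in the Methods. Assumes the textstat and scipy packages are installed.
              from scipy.stats import chi2_contingency, ttest_ind
              import textstat

              def score_output(text: str) -> dict:
                  """Objective metrics for a single AI-generated output."""
                  return {
                      "word_count": len(text.split()),
                      # grade-level readability estimate (formula choice is an assumption)
                      "grade_level": textstat.flesch_kincaid_grade(text),
                  }

              def compare_query_types(general_outputs, specific_outputs):
                  """Compare general vs. specific query outputs on length and content coding."""
                  # t test on output length (in words) between the two query types
                  g = [len(t.split()) for t in general_outputs]
                  s = [len(t.split()) for t in specific_outputs]
                  _, p_length = ttest_ind(g, s)

                  # chi-square on a 2x2 table of content codings (background vs. prescriptive);
                  # the counts below are placeholders, not the study's data
                  table = [[18, 2],   # general queries: background, prescriptive
                           [3, 17]]   # specific queries: background, prescriptive
                  chi2, p_content, _, _ = chi2_contingency(table)
                  return {"p_length": p_length, "p_content": p_content}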

          Results:

          From a total of 40 AI-generated outputs, the mean output length was 191.8 words, and readability was at the thirteenth grade level. Regarding content, 97.5% of all query outputs were on the appropriate topic, and the medical advice was deemed reasonable in 100% of cases. General queries more frequently reported overarching background information, whereas specific queries more frequently reported prescriptive information (P < 0.0001). AI outputs specifically recommended following surgeon-provided postoperative instructions in 82.5% of instances.

          Conclusions:

          Currently available AI tools, in their nascent form, can provide recommendations for common perioperative questions and concerns regarding reduction mammaplasty. With further calibration, AI interfaces may serve as a tool for fielding patient queries in the future; however, patients must always retain the ability to bypass the technology and contact their surgeon directly.


                Author and article information

                Journal
                Plast Reconstr Surg Glob Open (GOX)
                Plastic and Reconstructive Surgery Global Open
                Lippincott Williams & Wilkins (Hagerstown, MD)
                ISSN: 2169-7574
                February 2024
                02 February 2024
                12(2): e5580
                Affiliations
                From the Hansjörg Wyss Department of Plastic Surgery, NYU Langone, New York, N.Y.
                Wellstar Medical College of Georgia, Augusta, Ga.
                Author notes
                Nolan S. Karp, MD, Hansjörg Wyss Department of Plastic Surgery, New York University Langone Health, 305 East 47th Street, Suite 1A, New York, NY 10017, E-mail: Nolan.Karp@nyulangone.org
                Article
                00008
                DOI: 10.1097/GOX.0000000000005580
                PMC: 10836902
                PMID: 38313585
                db26a3d6-b0b3-4124-97f7-3c3dea0f1920
                Copyright © 2024 The Authors. Published by Wolters Kluwer Health, Inc. on behalf of The American Society of Plastic Surgeons.

                This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.

                History
                Received: 13 November 2023
                Accepted: 5 December 2023
                Categories
                Technology
                Original Article
