
      Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment

      Brief Research Report


          Abstract

          ChatGPT, an artificial intelligence language model developed by OpenAI, holds the potential for contributing to the field of mental health. Nevertheless, although ChatGPT theoretically shows promise, its clinical abilities in suicide prevention, a significant mental health concern, have yet to be demonstrated. To address this knowledge gap, this study aims to compare ChatGPT’s assessments of mental health indicators to those of mental health professionals in a hypothetical case study that focuses on suicide risk assessment. Specifically, ChatGPT was asked to evaluate a text vignette describing a hypothetical patient with varying levels of perceived burdensomeness and thwarted belongingness. The ChatGPT assessments were compared to the norms of mental health professionals. The results indicated that ChatGPT rated the risk of suicide attempts lower than did the mental health professionals in all conditions. Furthermore, ChatGPT rated mental resilience lower than the norms in most conditions. These results imply that gatekeepers, patients or even mental health professionals who rely on ChatGPT for evaluating suicidal risk or as a complementary tool to improve decision-making may receive an inaccurate assessment that underestimates the actual suicide risk.
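          The abstract describes the assessment workflow only at a high level. As a rough illustration, the sketch below shows how a vignette-based query to ChatGPT could be scripted and its ratings collected across the four burdensomeness/belongingness conditions. It is a minimal sketch, not the authors' protocol: the vignette wording, the 1-7 rating scale, the model name, and the use of the OpenAI Python client are all assumptions made for illustration.

          # Minimal sketch of a vignette-based ChatGPT query (illustrative only).
          # The vignette text, rating scale, and model name are assumptions,
          # not reproduced from the study.
          from itertools import product
          from openai import OpenAI  # official OpenAI Python client (v1+); requires an API key

          client = OpenAI()

          def build_vignette(burden: str, belong: str) -> str:
              # Placeholder vignette; the study's actual text vignette is not shown here.
              return (f"A hypothetical patient describes {burden} perceived burdensomeness "
                      f"and {belong} thwarted belongingness.")

          # Hypothetical 2x2 design: perceived burdensomeness x thwarted belongingness
          for burden, belong in product(["low", "high"], repeat=2):
              prompt = (build_vignette(burden, belong) +
                        " On a scale from 1 (very low) to 7 (very high), rate "
                        "(a) the risk of a suicide attempt and (b) the patient's mental "
                        "resilience. Reply with two numbers only.")
              response = client.chat.completions.create(
                  model="gpt-3.5-turbo",  # assumed model; the paper refers only to "ChatGPT"
                  messages=[{"role": "user", "content": prompt}],
              )
              print(burden, belong, response.choices[0].message.content)

          Ratings gathered this way could then be averaged per condition and compared against the published norms of mental health professionals, which is the comparison the abstract reports.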


          Most cited references (36)


          Risk factors for suicidal thoughts and behaviors: A meta-analysis of 50 years of research.

          Suicidal thoughts and behaviors (STBs) are major public health problems that have not declined appreciably in several decades. One of the first steps to improving the prevention and treatment of STBs is to establish risk factors (i.e., longitudinal predictors). To provide a summary of current knowledge about risk factors, we conducted a meta-analysis of studies that have attempted to longitudinally predict a specific STB-related outcome. This included 365 studies (3,428 total risk factor effect sizes) from the past 50 years. The present random-effects meta-analysis produced several unexpected findings: across odds ratio, hazard ratio, and diagnostic accuracy analyses, prediction was only slightly better than chance for all outcomes; no broad category or subcategory accurately predicted far above chance levels; predictive ability has not improved across 50 years of research; studies rarely examined the combined effect of multiple risk factors; risk factors have been homogeneous over time, with 5 broad categories accounting for nearly 80% of all risk factor tests; and the average study was nearly 10 years long, but longer studies did not produce better prediction. The homogeneity of existing research means that the present meta-analysis could only speak to STB risk factor associations within very narrow methodological limits, limits that have not allowed for tests that approximate most STB theories. The present meta-analysis accordingly highlights several fundamental changes needed in future studies. In particular, these findings suggest the need for a shift in focus from risk factors to machine learning-based risk algorithms.

            Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models

            We evaluated the performance of a large language model called ChatGPT on the United States Medical Licensing Exam (USMLE), which consists of three exams: Step 1, Step 2CK, and Step 3. ChatGPT performed at or near the passing threshold for all three exams without any specialized training or reinforcement. Additionally, ChatGPT demonstrated a high level of concordance and insight in its explanations. These results suggest that large language models may have the potential to assist with medical education, and potentially, clinical decision-making.

              Suicide prevention strategies revisited: 10-year systematic review

              Many countries are developing suicide prevention strategies for which up-to-date, high-quality evidence is required. We present updated evidence for the effectiveness of suicide prevention interventions since 2005.

                Author and article information

                Contributors
                Journal
                Frontiers in Psychiatry (Front. Psychiatry)
                Publisher: Frontiers Media S.A.
                ISSN: 1664-0640
                Published: 01 August 2023
                Volume: 14
                Article: 1213141
                Affiliations
                [1] Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
                [2] Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
                [3] Faculty of Graduate Studies, Oranim Academic College, Kiryat Tiv'on, Israel
                Author notes

                Edited by: Beth Krone, Icahn School of Medicine at Mount Sinai, United States

                Reviewed by: M. David Rudd, University of Memphis, United States; Amna Mohyud Din Chaudhary, Case Western Reserve University, United States

                *Correspondence: Zohar Elyoseph, zohare@yvc.ac.il

                These authors have contributed equally to this work

                Article
                DOI: 10.3389/fpsyt.2023.1213141
                PMCID: PMC10427505
                PMID: 37593450
                Copyright © 2023 Elyoseph and Levkovich.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 28 April 2023
                Accepted: 19 July 2023
                Page count
                Figures: 2, Tables: 1, Equations: 0, References: 40, Pages: 7, Words: 5195
                Categories
                Psychiatry
                Brief Research Report
                Custom metadata
                Digital Mental Health

                Clinical Psychology & Psychiatry
                artificial intelligence, ChatGPT, diagnosis, psychological assessment, suicide risk, risk assessment, text vignette
