      Assessing the utility of ChatGPT as an artificial intelligence‐based large language model for information to answer questions on myopia


          Abstract

          Purpose

          ChatGPT is an artificial intelligence language model that uses natural language processing to simulate human conversation. It has seen a wide range of applications, including healthcare education, research and clinical practice. This study evaluated the accuracy and quality of the information ChatGPT provides in answer to questions on myopia.

          Methods

          A series of 11 questions spanning nine categories (general summary, cause, symptom, onset, prevention, complication, natural history, treatment and prognosis) was generated for this cross‐sectional study. Each question was entered five times, each time into a fresh ChatGPT session free from the influence of prior questions. The responses were evaluated by a five‐member team of optometry teaching and research staff, who individually rated the accuracy and quality of each response on a Likert scale on which a higher score indicated better information (1: very poor; 2: poor; 3: acceptable; 4: good; 5: very good). Median scores for each question were estimated and compared between evaluators, and both the agreement between the five evaluators and the reliability statistics of the questions were estimated.
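The design above yields an 11 × 5 × 5 array of Likert ratings (questions × repeated sessions × evaluators). A minimal sketch of how per-question and per-evaluator medians could be derived from such an array; the data here are randomly generated for illustration and are not the study's ratings:

```python
import numpy as np

# Illustrative ratings: 11 questions x 5 sessions x 5 evaluators,
# each entry a Likert score from 1 (very poor) to 5 (very good).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(11, 5, 5))

# Median score per question, pooled over the 25 ratings each question received.
per_question_median = np.median(ratings.reshape(11, -1), axis=1)

# Median score per evaluator, pooled over the 55 responses each evaluator rated.
per_evaluator_median = np.median(ratings.reshape(11 * 5, 5), axis=0)
```

With 11 questions answered 5 times and rated by 5 evaluators, this layout also accounts for the 275 individual ratings reported in the Results.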

          Results

          Of the 11 questions on myopia, ChatGPT provided good quality information (median score: 4.0) for 10 questions and an acceptable response (median score: 3.0) for the remaining question. Of the 275 responses in total, 66 (24%) were rated very good, 134 (49%) good, 60 (22%) acceptable, 10 (3.6%) poor and 5 (1.8%) very poor. A Cronbach's α of 0.807 indicated a good level of agreement between test items. The evaluators' ratings demonstrated only ‘slight agreement’ (Fleiss's κ = 0.005), with a significant difference in scoring among the evaluators (Kruskal–Wallis test, p < 0.001).
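The two agreement statistics reported here follow their standard textbook formulas. A hedged sketch of both, written against an input layout chosen for illustration (an observations × items score matrix for Cronbach's α, and a subjects × categories count matrix with a fixed number of raters for Fleiss's κ), not the study's actual analysis code:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (observations x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of row totals
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def fleiss_kappa(counts):
    """Fleiss's kappa for an (subjects x categories) matrix of rating counts,
    assuming every subject was rated by the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_subjects = counts.shape[0]
    n_raters = counts[0].sum()
    # Observed agreement: per-subject pairwise agreement, then averaged.
    p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement from the marginal category proportions.
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    p_e = (p_j ** 2).sum()
    return (p_bar - p_e) / (1.0 - p_e)
```

When all five raters place every response in the same category, `fleiss_kappa` returns 1.0; a value near zero, like the 0.005 reported above, means agreement barely above what the marginal rating proportions would produce by chance.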

          Conclusion

          Overall, ChatGPT generated good quality information to answer questions on myopia. Although ChatGPT shows great potential in rapidly providing information on myopia, the presence of inaccurate responses demonstrates that further evaluation and awareness concerning its limitations are crucial to avoid potential misinterpretation.


                Author and article information

                Journal
                Ophthalmic and Physiological Optics (Wiley)
                ISSN: 0275-5408 (print); 1475-1313 (online)
                Published online: 21 July 2023
                Issue: November 2023, Volume 43, Issue 6, pp. 1562–1570

                Affiliations
                [1] School of Optometry, College of Health and Life Sciences, Aston University, Birmingham, UK

                Article
                DOI: 10.1111/opo.13207
                © 2023. Licensed under CC BY 4.0: http://creativecommons.org/licenses/by/4.0/
