      Randomised controlled trials evaluating artificial intelligence in clinical practice: a scoping review


          Abstract

This scoping review of randomised controlled trials on artificial intelligence (AI) in clinical practice reveals expanding interest in AI across clinical specialties and locations. The USA and China lead in the number of trials, with a focus on deep learning systems for medical imaging, particularly in gastroenterology and radiology. A majority of trials (70 [81%] of 86) report positive primary endpoints, primarily related to diagnostic yield or performance; however, the predominance of single-centre trials, limited demographic reporting, and varying reports of operational efficiency raise concerns about the generalisability and practicality of these results. Despite the promising outcomes, the likelihood of publication bias must be weighed, and more comprehensive research is needed, including multicentre trials, diverse outcome measures, and improved reporting standards. Future AI trials should prioritise patient-relevant outcomes to fully understand AI’s true effects and limitations in health care.


Most cited references (52)


          PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation

          Scoping reviews, a type of knowledge synthesis, follow a systematic approach to map evidence on a topic and identify main concepts, theories, sources, and knowledge gaps. Although more scoping reviews are being done, their methodological and reporting quality need improvement. This document presents the PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews) checklist and explanation. The checklist was developed by a 24-member expert panel and 2 research leads following published guidance from the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network. The final checklist contains 20 essential reporting items and 2 optional items. The authors provide a rationale and an example of good reporting for each item. The intent of the PRISMA-ScR is to help readers (including researchers, publishers, commissioners, policymakers, health care providers, guideline developers, and patients or consumers) develop a greater understanding of relevant terminology, core concepts, and key items to report for scoping reviews.

            Dissecting racial bias in an algorithm used to manage the health of populations

            Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
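To make the label-choice mechanism concrete, here is a minimal synthetic sketch in Python (not the paper's data, model, or population labels; the group names and the 0.7 access factor are assumptions for illustration). It shows how a score that predicts cost rather than illness under-selects a group whose spending is suppressed by unequal access to care, even when true illness burden is identically distributed.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True health need is identically distributed in both (hypothetical) groups.
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
need = rng.gamma(2.0, 1.0, n)        # latent illness burden

# Assumed mechanism: group B faces access barriers, so the same illness
# generates less observed spending.
access = np.where(group == 1, 0.7, 1.0)
cost = need * access + rng.normal(0.0, 0.1, n)

# Even a "perfect" cost predictor inherits the bias of the label choice:
# flagging the top 3% by predicted cost under-selects group B.
top = np.argsort(cost)[-int(0.03 * n):]
print(f"Group B share of flagged patients: {(group[top] == 1).mean():.1%} (vs ~50% under a need-based label)")

# At any fixed cost-based risk score, group B patients are sicker on average.
band = (cost > 2.0) & (cost < 2.2)
for g in (0, 1):
    print(f"Mean need within score band, group {g}: {need[band & (group == g)].mean():.2f}")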

              A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis

              Deep learning offers considerable promise for medical diagnostics. We aimed to evaluate the diagnostic accuracy of deep learning algorithms versus health-care professionals in classifying diseases using medical imaging.
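As a reminder of the headline metrics such a diagnostic-accuracy meta-analysis pools from each study's contingency table, here is a minimal sketch with made-up counts (not figures from the review):

# Sensitivity and specificity from a hypothetical 2x2 table.
tp, fn = 90, 10    # diseased cases: correctly flagged vs missed
tn, fp = 170, 30   # healthy cases: correctly cleared vs false alarms

sensitivity = tp / (tp + fn)   # proportion of diseased cases detected
specificity = tn / (tn + fp)   # proportion of healthy cases correctly cleared
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
# -> sensitivity = 90.0%, specificity = 85.0%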

                Author and article information

Journal
The Lancet. Digital Health (Lancet Digit Health)
ISSN: 2589-7500
May 2024; 6 (5): e367-e373
Published online 27 April 2024
                Affiliations
                Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA (R Han MS, P Rajpurkar PhD); Department of Computer Science, Stanford University, Stanford, CA, USA (R Han); University of California Los Angeles–Caltech Medical Scientist Training Program, Los Angeles, CA, USA (R Han); Department of Neurology, Yale School of Medicine, New Haven, CT, USA (J N Acosta MD); Rad AI, San Francisco, CA, USA (J N Acosta); Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada (Z Shakeri PhD); Stanford Prevention Research Center, Department of Medicine (Prof J P A Ioannidis MD DSc), and Meta-Research Innovation Center at Stanford (Prof J P A Ioannidis), Stanford University, Stanford, CA, USA; Scripps Research Translational Institute, Scripps Research, La Jolla, CA, USA (Prof E J Topol MD)
                Author notes

                Contributors

                RH and PR conceptualised the scoping review. JPAI, PR, and EJT supervised the scoping review. RH, JPAI, and PR contributed to the design of the scoping review. RH, JNA, and PR did the screening of the search results and data extraction. RH drafted the manuscript. ZS contributed to the presentation of data. All authors had access to the data, interpreted the analyses, and critically revised and edited the manuscript.

[*] Contributed equally

Correspondence to: Prof Eric J Topol, Scripps Research Translational Institute, Scripps Research, La Jolla, CA 92037, USA. etopol@scripps.edu
Article
DOI: 10.1016/S2589-7500(24)00047-5
PMID: 38670745
PMCID: PMC11068159
NIHMS: NIHMS1989111

                This is an Open Access article under the CC BY-NC-ND 4.0 license.

Categories
research-article

