
      Artificial intelligence in gastroenterology and hepatology: how to advance clinical practice while ensuring health equity

Gut (BMJ)


          Abstract

Artificial intelligence (AI) and machine learning (ML) systems are increasingly used in medicine to improve clinical decision-making and healthcare delivery. In gastroenterology and hepatology, studies have explored myriad opportunities for AI/ML applications, many of which are already making the transition to the bedside. Despite these advances, there is a risk that biases and health inequities can be introduced or exacerbated by these technologies. If unrecognised, these technologies could generate or worsen systematic racial, ethnic and sex disparities when deployed on a large scale. There are several mechanisms through which AI/ML could contribute to health inequities in gastroenterology and hepatology, including in the diagnosis of oesophageal cancer, management of inflammatory bowel disease (IBD), liver transplantation, colorectal cancer screening and many others. This review adapts a framework for ethical AI/ML development and application to gastroenterology and hepatology such that clinical practice is advanced while minimising bias and optimising health equity.

Most cited references (92)


          The FAIR Guiding Principles for scientific data management and stewardship

There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them and some exemplar implementations in the community.

            Worldwide incidence and prevalence of inflammatory bowel disease in the 21st century: a systematic review of population-based studies.

            Inflammatory bowel disease is a global disease in the 21st century. We aimed to assess the changing incidence and prevalence of inflammatory bowel disease around the world.

              Dissecting racial bias in an algorithm used to manage the health of populations

              Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.

                Author and article information

Journal: Gut (BMJ)
ISSN: 0017-5749 (print); 1468-3288 (electronic)
Publication dates: June 10 2022; August 11 2022; September 2022
Volume 71, Issue 9: 1909–1915
DOI: 10.1136/gutjnl-2021-326271
PMID: 35688612
                © 2022
