
      Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond

      research-article



          Abstract

          Explainable Artificial Intelligence (XAI) is an emerging research topic in machine learning aimed at unboxing how AI systems’ black-box choices are made. This research field inspects the measures and models involved in decision-making and seeks solutions for explaining them explicitly. Many machine learning algorithms cannot manifest how and why a decision has been made; this is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability of these black-box models. Although deep neural networks can deliver impressive performance, XAI is becoming increasingly crucial for deep learning powered applications, especially for medical and healthcare studies. The insufficient explainability and transparency of most existing AI systems may be one of the major reasons why successful implementation and integration of AI tools into routine clinical practice remain uncommon. In this study, we first surveyed the current progress of XAI and, in particular, its advances in healthcare applications. We then introduced our XAI solutions leveraging multi-modal and multi-centre data fusion, and validated them in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses demonstrate the efficacy of our proposed XAI solutions, from which we envisage successful applications in a broader range of clinical questions.
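
          To make the notion of "unboxing" a black-box classifier concrete, the sketch below shows one simple post-hoc explanation technique, occlusion sensitivity: patches of the input are masked one at a time and the resulting drop in the predicted probability is recorded as a coarse importance map. The model, input size and patch size are placeholder assumptions for illustration only; this is not the method proposed in the article.

```python
# Illustrative occlusion-sensitivity map for an image classifier (PyTorch).
# The tiny CNN and the 1-channel 128x128 input are hypothetical stand-ins.
import torch
import torch.nn as nn

def occlusion_map(model: nn.Module, image: torch.Tensor, target_class: int,
                  patch: int = 16, stride: int = 16) -> torch.Tensor:
    """Slide a mean-valued patch over the image and record how much the
    probability of `target_class` drops; large drops mark influential regions."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
        _, h, w = image.shape
        heat = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = image.mean()
                prob = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
                heat[i, j] = base - prob  # contribution of this region to the decision
    return heat

# Usage with a stand-in network (assumption: 2 output classes).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 2))
image = torch.rand(1, 128, 128)
heatmap = occlusion_map(model, image, target_class=1)
print(heatmap.shape)  # grid of importance scores over the image
```

          Occlusion maps are model-agnostic but need one forward pass per patch position, which is why gradient-based attributions are often preferred for large medical volumes.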

          Highlights

          • We performed a mini-review of XAI in medicine and digital healthcare.

          • Our mini-review comprehensively covers the most recent studies of XAI in the medical field.

          • We proposed two XAI methods and constructed two representative showcases.

          • One of our XAI models is based on weakly supervised learning for COVID-19 classification (see the illustrative sketch after this list).

          • One of our XAI models is developed for ventricle segmentation in hydrocephalus patients.
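
          As a rough illustration of the weakly supervised idea above, the hypothetical sketch below shows how a classifier trained with image-level labels only can still produce a coarse localisation (class activation) map by weighting its convolutional feature maps with the classification-head weights. The architecture and tensor sizes are assumptions for illustration; they are not the models built in the article's showcases.

```python
# Minimal class-activation-map (CAM) sketch for weakly supervised localisation.
# Hypothetical layer sizes; a generic illustration, not the article's model.
import torch
import torch.nn as nn

class WeaklySupervisedCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(            # convolutional feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)       # global average pooling
        self.fc = nn.Linear(32, n_classes)        # image-level classification head

    def forward(self, x):
        f = self.features(x)                      # (B, 32, H, W)
        logits = self.fc(self.pool(f).flatten(1))
        return logits, f

    def cam(self, x, target_class: int):
        """Weight the feature maps by the head weights of `target_class` to get
        a coarse evidence map, without any pixel-level annotation."""
        _, f = self.forward(x)
        w = self.fc.weight[target_class]          # (32,)
        return torch.einsum("bchw,c->bhw", f, w)

model = WeaklySupervisedCNN()
scan = torch.rand(1, 1, 128, 128)                 # stand-in for a CT slice
logits, _ = model(scan)
heat = model.cam(scan, target_class=logits.argmax(1).item())
print(heat.shape)  # (1, 128, 128): per-pixel evidence from image-level labels only
```

          In a weakly supervised pipeline, such a map can act as the explanation of the prediction or as a seed for a segmentation, even though no pixel-level labels were used during training.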


          Most cited references (216)


          Deep Residual Learning for Image Recognition


            UK Biobank: An Open Access Resource for Identifying the Causes of a Wide Range of Complex Diseases of Middle and Old Age

            Cathie Sudlow and colleagues describe the UK Biobank, a large population-based prospective study, established to allow investigation of the genetic and non-genetic determinants of the diseases of middle and old age.

              A survey on deep learning in medical image analysis

              Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research.

                Author and article information

                Journal: Inf Fusion – An International Journal on Information Fusion
                Publisher: Elsevier
                ISSN: 1566-2535; 1872-6305
                Published: 1 January 2022 (January 2022 issue)
                Volume: 77
                Pages: 29–52
                Affiliations
                [a] National Heart and Lung Institute, Imperial College London, London, UK
                [b] Royal Brompton Hospital, London, UK
                [c] Imperial Institute of Advanced Technology, Hangzhou, China
                [d] Hangzhou Ocean’s Smart Boya Co., Ltd, China
                [e] University of California, San Diego, La Jolla, CA, USA
                [f] Radiology Department, Shenzhen Second People’s Hospital, Shenzhen, China
                Article
                PII: S1566-2535(21)00159-7
                DOI: 10.1016/j.inffus.2021.07.016
                8459787
                © 2021 The Authors

                This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

                History: 27 January 2021; 25 May 2021; 25 July 2021
                Categories
                Article

                Keywords: explainable AI, information fusion, multi-domain information fusion, weakly supervised learning, medical image analysis
