We propose and test a novel graph learning-based explainable artificial intelligence (XAI) approach to address the challenge of developing explainable predictions of patient length of stay (LoS) in intensive care units (ICUs). Specifically, we address a notable gap in the literature on XAI methods that identify interactions between model input features to predict patient health outcomes. Our model intrinsically constructs a patient-level graph that identifies the importance of feature interactions in predicting health outcomes. Compared with traditional XAI methods for LoS prediction, it demonstrates state-of-the-art explanation capabilities through the identification of salient feature interactions. We supplement our XAI approach with a small-scale user study, which demonstrates that our model can increase user acceptance of artificial intelligence (AI) model-based decisions by making model predictions more interpretable. Our model lays the foundation for interpretable, predictive tools that healthcare professionals can use to improve ICU resource allocation decisions and enhance the clinical relevance of AI systems in providing effective patient care. Although our primary research setting is the ICU, our graph learning model can be generalized to other healthcare contexts to accurately identify key feature interactions for the prediction of other health outcomes, such as mortality, readmission risk, and hospitalizations.
Intensive care units (ICUs) are critical for treating severe health conditions but account for a significant share of hospital expenditures. Accurate prediction of ICU length of stay (LoS) can enhance hospital resource management, reduce readmissions, and improve patient care. In recent years, the widespread adoption of electronic health records and advances in artificial intelligence (AI) have enabled accurate predictions of ICU LoS. However, there is a notable gap in the literature on explainable artificial intelligence (XAI) methods that identify interactions between model input features to predict patient health outcomes. This gap is especially noteworthy because the medical literature suggests that complex interactions between clinical features are likely to significantly affect patient health outcomes. We propose a novel graph learning-based approach that offers state-of-the-art prediction and greater interpretability for ICU LoS prediction. Specifically, our graph-based XAI model generates interaction-based explanations supported by evidence-based medicine, which provide richer patient-level insights than existing XAI methods. We test the statistical significance of our XAI approach using a distance-based separation index and use perturbation analyses to examine the sensitivity of our model explanations to changes in input features. Finally, we validate the explanations of our graph learning model using the conceptual evaluation property (Co-12) framework and a small-scale user study with ICU clinicians. Our approach offers interpretable predictions of ICU LoS grounded in design science research, which can facilitate greater integration of AI-enabled decision support systems into clinical workflows and thereby enable clinicians to derive greater value from them.
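To make the two ideas summarized above concrete, the following minimal sketch (ours, not the paper's implementation) illustrates scoring pairwise feature interactions for a single patient and checking how an explanation shifts under a small input perturbation. The feature names, the stand-in interaction weights, and the bilinear scoring function are hypothetical assumptions for illustration only.

```python
# Minimal sketch (hypothetical): interaction-based explanation and a perturbation check.
import numpy as np

rng = np.random.default_rng(0)

features = ["heart_rate", "creatinine", "lactate", "age"]   # hypothetical ICU features
x = rng.normal(size=len(features))                           # one patient's standardized values
W = rng.normal(size=(len(features), len(features)))          # stand-in for learned interaction weights

def interaction_scores(x, W):
    """Score every feature pair (i, j) with a bilinear term |x_i * W_ij * x_j|."""
    S = np.abs(np.outer(x, x) * W)
    np.fill_diagonal(S, 0.0)   # keep only pairwise interactions, drop self-terms
    return S

S = interaction_scores(x, W)
i, j = np.unravel_index(np.argmax(S), S.shape)
print(f"Most salient interaction: {features[i]} x {features[j]} (score {S[i, j]:.3f})")

# Perturbation analysis: nudge one input feature and measure how much the
# explanation (the interaction-score matrix) changes.
x_perturbed = x.copy()
x_perturbed[features.index("lactate")] += 0.5
delta = np.linalg.norm(interaction_scores(x_perturbed, W) - S)
print(f"Explanation shift after perturbing lactate: {delta:.3f}")
```

In this toy setup, a stable explanation would show a small shift under modest perturbations; the paper's perturbation analyses pursue the same intuition with the actual learned patient-level graphs.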
History: Olivia Sheng, Senior Editor; Abhay Mishra, Associate Editor.
Funding: I. R. Bardhan is grateful for the financial support of the Foster Parker Centennial Professorship and the McCombs School of Business [Dean’s Excellence Research Grant].
Supplemental Material: The online appendix is available at https://doi.org/10.1287/isre.2023.0029.